Electronics, Volume 14, Issue 5 (March-1 2025) – 234 articles

Cover Story: Delays in the manual monitoring of fluid infusion can put patients at risk, as quick responses from nursing staff are critical. LoRa networks are widely used today, and their application in healthcare can enhance medical care and save time for staff. This work presents a low-cost, real-time monitoring system for fluid infusion using a LoRa network. The system tracks fluid weight and drop count, ensuring accuracy through thorough evaluation. With its affordability and reliability, it can easily be implemented in healthcare settings, improving patient care and optimizing medical staff efficiency. This study highlights the potential of IoT solutions in modern healthcare.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
19 pages, 5010 KiB  
Article
Quad-Beam 4 × 2 Array Antenna for Millimeter-Wave 5G Applications
by Parveez Shariff Bhadravathi Ghouse, Tanweer Ali, Pallavi R. Mane, Sameena Pathan, Sudheesh Puthenveettil Gopi, Bal S. Virdee, Jaume Anguera and Prashant M. Prabhu
Electronics 2025, 14(5), 1056; https://doi.org/10.3390/electronics14051056 - 6 Mar 2025
Viewed by 476
Abstract
This article presents the design of a novel, compact, 4 × 2 planar-array antenna that provides quad-beam radiation in the broadside direction, enhancing coverage and serviceability for millimeter-wave applications. The antenna utilizes a corporate (parallel) feed network to deliver equal power and phase to all elements. Non-uniform element spacing in the two orthogonal planes, exceeding 0.5λ1 (λ1 being the wavelength at 30 GHz), results in a quad-beam radiation pattern. Two beams are formed in the xz-plane and two in the yz-plane, oriented at angles of θ=±54°. However, this spacing leads to null radiation at the center and splits the radiation energy, reducing the overall gain. The measured half-power beamwidth (HPBW) is 30° in the xz-plane and 35° in the yz-plane, with X-polarization levels of −20.5 dB and −26 dB, respectively. The antenna achieves a bandwidth of 28.5–31.1 GHz and a peak gain of 10.6 dBi. Furthermore, the gain can be enhanced and the beamwidth narrowed by increasing the aperture size, i.e., by replicating the structure and tuning the feed network. These features make the proposed antenna suitable for 5G wireless communication systems. Full article
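For context, the quad-beam behavior follows from the textbook grating-lobe relation for a uniform array; this is a standard result quoted here for orientation, not taken from the paper. Additional beams appear where the array factor repeats:

```latex
\[
  \sin\theta_m = \sin\theta_0 + m\,\frac{\lambda}{d}, \qquad m = \pm 1, \pm 2, \ldots
\]
```

For a broadside design (θ0 = 0), a beam pair at ±54° would correspond to a spacing of roughly d = λ/sin 54° ≈ 1.24λ, consistent with the abstract's note that the element spacing well exceeds 0.5λ1.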
22 pages, 1744 KiB  
Article
Hybrid Long-Range–5G Multi-Sensor Platform for Predictive Maintenance for Ventilation Systems
by Praveen Mohanram and Robert H. Schmitt
Electronics 2025, 14(5), 1055; https://doi.org/10.3390/electronics14051055 - 6 Mar 2025
Viewed by 755
Abstract
In this paper, we present a multi-sensor platform for predictive maintenance featuring hybrid long-range (LoRa) and 5G connectivity. This hybrid approach combines LoRa’s low-power transmission for energy efficiency with 5G’s real-time data capabilities. The hardware platform integrates multiple sensors to monitor machine health parameters, with data analyzed on the device using pre-trained AI models to assess the machine’s condition. Inferences are transmitted via LoRa to the operator for maintenance scheduling, while a cloud application tracks and stores sensor data. Periodic sensor data bursts are sent via 5G to update the AI model, which is then delivered back to the platform through over-the-air (OTA) updates. We provide a comprehensive overview of the hardware architecture, along with an in-depth analysis of the data generated by the sensors, and its processing methodology. However, the data analysis and the software for ventilation control and its predictive capabilities are not the focus of this paper and are not presented. Full article
(This article belongs to the Special Issue 5G Mobile Telecommunication Systems and Recent Advances)
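As a rough illustration of the hybrid uplink split described above, the sketch below sends compact inference results over LoRa and periodic raw-data bursts over 5G. All names (`Reading`, `classify`, `send_lora`, `send_5g`) and thresholds are hypothetical stand-ins, not the authors' firmware:

```python
# Hypothetical sketch of the LoRa/5G split; print stubs stand in for radios.
from dataclasses import dataclass

@dataclass
class Reading:
    vibration_rms: float   # machine-health features monitored on the device
    temperature_c: float

def classify(r: Reading) -> str:
    """Stand-in for the on-device pre-trained AI model."""
    return "degraded" if r.vibration_rms > 4.5 or r.temperature_c > 80 else "healthy"

def send_lora(payload) -> None:
    print("LoRa uplink:", payload)           # low-power, low-rate channel

def send_5g(payload) -> None:
    print("5G uplink:", payload)             # high-rate burst channel

def uplink(r: Reading, sample_no: int, burst_every: int = 1000) -> None:
    send_lora(classify(r))                   # inference only -> energy-efficient
    if sample_no % burst_every == 0:
        send_5g(r)                           # periodic raw burst for retraining
```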
19 pages, 9660 KiB  
Article
An Efficient Synthetic Aperture Radar Interference Suppression Method Based on Image Domain Regularization
by Xuyang Ge, Xingdong Liang, Hang Li, Zhiyu Jiang, Yuan Zhang and Xiangxi Bu
Electronics 2025, 14(5), 1054; https://doi.org/10.3390/electronics14051054 - 6 Mar 2025
Viewed by 694
Abstract
Synthetic aperture radar (SAR) systems, as wideband radar systems, are inherently susceptible to interference signals within their operational frequency band, which significantly affects SAR signal processing and image interpretation. Recent studies have demonstrated that semiparametric methods (e.g., the RPCA method) exhibit excellent performance in suppressing these interference signals. However, these methods predominantly focus on processing SAR’s raw echo data, which does not satisfy the sparsity requirements and entails extremely high computational complexity, complicating integration with imaging algorithms. This paper introduces an effective method for suppressing interference signals by leveraging the sparsity of the SAR image domain. It utilizes the sparsity of the interference signal in the two-dimensional frequency domain, following focusing processing, rather than relying on low-rank properties. This approach significantly reduces the computational complexity. Ultimately, the effectiveness and efficiency of the proposed algorithm are validated through experiments conducted with simulated and real SAR data. Full article
(This article belongs to the Special Issue New Challenges in Remote Sensing Image Processing)
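A minimal sketch of the general idea, assuming (as the abstract states) that the interference is sparse in the two-dimensional spectrum of the focused image; the magnitude-clipping rule below is a crude stand-in for the paper's regularization:

```python
# Attenuate sparse interference spikes in the 2-D spectrum of a focused SAR
# image by clipping spectral magnitudes above a data-driven threshold.
import numpy as np

def suppress_interference(img: np.ndarray, k: float = 5.0) -> np.ndarray:
    spec = np.fft.fft2(img)
    mag = np.abs(spec)
    thresh = mag.mean() + k * mag.std()              # spike threshold
    scale = np.minimum(1.0, thresh / np.maximum(mag, 1e-12))
    return np.real(np.fft.ifft2(spec * scale))       # back to the image domain
```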
26 pages, 6237 KiB  
Article
Generative AI in Education: Perspectives Through an Academic Lens
by Iulian Întorsureanu, Simona-Vasilica Oprea, Adela Bâra and Dragoș Vespan
Electronics 2025, 14(5), 1053; https://doi.org/10.3390/electronics14051053 - 6 Mar 2025
Cited by 2 | Viewed by 1754
Abstract
In this paper, we investigated the role of generative AI in education in academic publications extracted from Web of Science (3506 records; 2019–2024). The proposed methodology included three main streams: (1) Monthly analysis trends; top-ranking research areas, keywords and universities; frequency of keywords over time; a keyword co-occurrence map; collaboration networks; and a Sankey diagram illustrating the relationship between AI-related terms, publication years and research areas; (2) Sentiment analysis using a custom list of words, VADER and TextBlob; (3) Topic modeling using Latent Dirichlet Allocation (LDA). Terms such as “artificial intelligence” and “generative artificial intelligence” were predominant, but they diverged and evolved over time. By 2024, AI applications had branched into specialized fields, including education and educational research, computer science, engineering, psychology, medical informatics, healthcare sciences, general medicine and surgery. The sentiment analysis reveals a growing optimism in academic publications regarding generative AI in education, with a steady increase in positive sentiment from 2023 to 2024, while maintaining a predominantly neutral tone. Five main topics were derived from AI applications in education, based on an analysis of the most relevant terms extracted by LDA: (1) Gen-AI’s impact in education and research; (2) ChatGPT as a tool for university students and teachers; (3) Large language models (LLMs) and prompting in computing education; (4) Applications of ChatGPT in patient education; (5) ChatGPT’s performance in medical examinations. The research identified several emerging topics: discipline-specific application of LLMs, multimodal gen-AI, personalized learning, AI as a peer or tutor and cross-cultural and multilingual tools aimed at developing culturally relevant educational content and supporting the teaching of lesser-known languages. Further, gamification with generative AI involves designing interactive storytelling and adaptive educational games to enhance engagement and hybrid human–AI classrooms explore co-teaching dynamics, teacher–student relationships and the impact on classroom authority. Full article
(This article belongs to the Special Issue Techniques and Applications in Prompt Engineering and Generative AI)
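The two analysis streams named in the abstract map onto standard tooling; a minimal sketch with scikit-learn's LDA and the VADER analyzer (corpus and settings are illustrative, not the study's configuration):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

abstracts = [
    "ChatGPT as a tutor for university students and teachers",
    "large language models and prompting in computing education",
    "generative AI for patient education and medical examinations",
]

# Topic modeling: LDA over a bag-of-words representation.
X = CountVectorizer(stop_words="english").fit_transform(abstracts)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Rule-based sentiment: VADER compound score in [-1, 1].
analyzer = SentimentIntensityAnalyzer()
compound = [analyzer.polarity_scores(a)["compound"] for a in abstracts]
```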
22 pages, 3393 KiB  
Article
A Dynamic Spatio-Temporal Traffic Prediction Model Applicable to Low Earth Orbit Satellite Constellations
by Kexuan Liu, Yasheng Zhang and Shan Lu
Electronics 2025, 14(5), 1052; https://doi.org/10.3390/electronics14051052 - 6 Mar 2025
Viewed by 544
Abstract
Low Earth Orbit (LEO) constellations support the transmission of various communication services and have been widely applied in fields such as global Internet access, the Internet of Things, remote sensing monitoring, and emergency communication. With the surge in traffic volume, the quality of user services has faced unprecedented challenges. Achieving accurate low Earth orbit constellation network traffic prediction can optimize resource allocation, enhance the performance of LEO constellation networks, reduce unnecessary costs in operation management, and enable the system to adapt to the development of future services. Ground networks often adopt methods such as machine learning (support vector machine, SVM) or deep learning (convolutional neural network, CNN; generative adversarial network, GAN) to predict future short- and long-term traffic information, aiming to optimize network performance and ensure service quality. However, these methods lack an understanding of the high dynamics of LEO satellites and are not applicable to LEO constellations. Therefore, designing an intelligent traffic prediction model that can accurately predict multi-service scenarios in LEO constellations remains an unsolved challenge. In this paper, in light of the highly dynamic topology and high-frequency data streams of LEO constellation traffic, the authors propose DST-LEO, a dynamic spatio-temporal low Earth orbit satellite traffic prediction model. This model captures the implicit features among satellite nodes through multiple attention mechanism modules and processes the traffic volume and traffic connection/disconnection data of inter-satellite links via a multi-source data separation and fusion strategy, respectively. After splicing and fusing at a specific scale, the model performs prediction through the attention mechanism. The model achieved a short-term prediction RMSE of 0.0028 and an MAE of 0.0018 on the Abilene dataset. For long-term prediction on the Abilene dataset, the RMSE was 0.0054 and the MAE was 0.0039. The RMSE of short-term prediction on the dataset simulated by the internal low Earth orbit constellation business simulation system was 0.0034, and the MAE was 0.0026. For long-term prediction, the RMSE reached 0.0029 and the MAE reached 0.0022. Compared with other time series prediction models, the errors decreased by 22.3% in terms of the mean squared error and 18.0% in terms of the mean absolute error. The authors validated the functions of each module within the model through ablation experiments and further analyzed the effectiveness of this model in the task of LEO constellation network traffic prediction. Full article
(This article belongs to the Special Issue Future Generation Non-Terrestrial Networks)
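For reference, the reported RMSE/MAE figures follow the conventional definitions; a minimal computation with illustrative arrays (not the paper's data):

```python
import numpy as np

y_true = np.array([0.12, 0.08, 0.15, 0.11])   # observed link traffic (toy values)
y_pred = np.array([0.11, 0.09, 0.14, 0.13])   # model predictions

rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))   # root mean squared error
mae = np.mean(np.abs(y_true - y_pred))            # mean absolute error
```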
12 pages, 4424 KiB  
Article
Self-Calibration Method for Accurate Direct-Current Ratio Calibration
by Changwei Zhai, Jianting Zhao, Yunfeng Lu, Pengcheng Hu and Kunli Zhou
Electronics 2025, 14(5), 1051; https://doi.org/10.3390/electronics14051051 - 6 Mar 2025
Viewed by 406
Abstract
A self-calibration method is proposed for determining the current ratio for DC current ratio devices. To demonstrate the feasibility of the proposal, an experimental test with nominal current ratios of 10:1, 100:1, and 1000:1 is performed, and a three-step calibration procedure is used: Firstly, eleven magnetic modulator-based DC current transducers (DCCTs) with a nominal current ratio of 10:1 are used to realize the self-calibration of a 10:1 current ratio through the reference current method; secondly, the eleven DCCTs are connected in series-parallel to achieve the current ratio build-up from 10:1 to 100:1; finally, another DCCT with a 100:1 auxiliary current ratio is used to realize the 1000:1 current ratio calibration, combining the two previous steps. The merit of the proposed self-calibration method is that the measurement does not need a high-precision current source or ammeter. Current ratio calibration based on this technology offers a lower-cost, more robust alternative to those using other devices. The experimental results show that the uncertainty of the calibration, mainly dominated by the noise and the stability of the magnetic modulator used, is within a few parts in 10⁻⁷ or better in the 1000 A range. Full article
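The build-up logic can be summarized as follows (notation ours; the quadrature combination assumes uncorrelated step uncertainties, which the abstract does not state explicitly):

```latex
\[
  N_{1000:1} = N_{10:1}\, N_{100:1}, \qquad
  \left.\frac{\Delta N}{N}\right|_{1000:1}
  \approx \sqrt{\left(\left.\frac{\Delta N}{N}\right|_{10:1}\right)^{2}
              + \left(\left.\frac{\Delta N}{N}\right|_{100:1}\right)^{2}}
\]
```

Under this reading, sub-ppm ratio errors at each step keep the combined uncertainty at the reported parts-in-10⁷ level.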
13 pages, 3791 KiB  
Article
γ-Fe2O3-Based MEMS Gas Sensor for Propane Detection
by Xiang Gao, Ying Chen, Pengcheng Xu, Dan Zheng and Xinxin Li
Electronics 2025, 14(5), 1050; https://doi.org/10.3390/electronics14051050 - 6 Mar 2025
Viewed by 1524
Abstract
The selective detection of propane gas molecules using semiconductor gas sensors has always been a challenge within research. In this study, we successfully synthesized a γ-Fe2O3 nanomaterial with a selective catalytic effect on propane and loaded it onto a ZnO sensing material to construct a double-layer microsensor, which showed good sensing response characteristics in detecting the refrigerant R290 (mainly propane). In addition, we also prepared a series of iron oxides, including nanomaterials such as α-Fe2O3, Fe3O4, and FeO, as well as γ-Fe2O3 materials with different specific surface areas obtained at various processing temperatures, and we carried out gas sensing research on R290. The results show that the γ-Fe2O3 material has higher sensitivity to R290, and the γ-Fe2O3 material calcined at 200 °C shows the best performance. Our results can provide a theoretical basis for the design and optimization of semiconductor gas sensors for alkane detection. Full article
11 pages, 4419 KiB  
Article
Investigation of Torque Ripple in Servo Motors with Different Magnet Geometries
by Hacı Dedecan, Engin Ayçiçek and Mustafa Gürkan Aydeniz
Electronics 2025, 14(5), 1049; https://doi.org/10.3390/electronics14051049 - 6 Mar 2025
Viewed by 489
Abstract
Servo motors are among the most efficient and precise performers within the category of permanent magnet synchronous motors. These motors stand out for their high power density, quiet operation, low maintenance, and wide operating speed range advantages. One of the disadvantages of these motors, which is also the subject of this study, is their high torque ripple. Torque ripple is critical in applications requiring precision, as it can affect operational performance and contribute to vibration and noise issues. Torque ripple can be reduced through design methods such as different winding layouts, slot openings, stator/rotor skewing, or pole offset. In this study, torque ripple of servo motors was investigated through various magnet geometry designs and analyses using the finite element method. Design and analysis studies were conducted for a reference servo motor, and alternative designs were obtained by modifying the rotor structure of the reference motor. In the studies conducted, it has been observed that the torque ripple, initially at 2.17 Nm, can be improved to as low as 1.23 Nm. This indicates that the torque ripple, which was initially at 3.75%, can be reduced to around 2.08%. However, performance losses may occur depending on the extent of improvement. Full article
(This article belongs to the Special Issue Advanced Design in Electrical Machines)
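As a sanity check on the reported numbers, assuming the percentage figures express ripple relative to the mean torque (the abstract does not define the convention):

```latex
\[
  \frac{2.17\,\mathrm{Nm}}{T_{\mathrm{avg}}} = 3.75\%
  \;\Rightarrow\; T_{\mathrm{avg}} \approx 57.9\,\mathrm{Nm},
  \qquad
  \frac{1.23\,\mathrm{Nm}}{57.9\,\mathrm{Nm}} \approx 2.1\%
\]
```

This is close to the stated 2.08%; the small gap is consistent with the mean torque differing slightly between designs.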
30 pages, 3046 KiB  
Review
A Survey of Advancements in Scheduling Techniques for Efficient Deep Learning Computations on GPUs
by Rupinder Kaur, Arghavan Asad, Seham Al Abdul Wahid and Farah Mohammadi
Electronics 2025, 14(5), 1048; https://doi.org/10.3390/electronics14051048 - 6 Mar 2025
Viewed by 963
Abstract
This comprehensive survey explores recent advancements in scheduling techniques for efficient deep learning computations on GPUs. The article highlights challenges related to parallel thread execution, resource utilization, and memory latency in GPUs, which can lead to suboptimal performance. The surveyed research focuses on novel scheduling policies to improve memory latency tolerance, exploit parallelism, and enhance GPU resource utilization. Additionally, it explores the integration of prefetching mechanisms, fine-grained warp scheduling, and warp switching strategies to optimize deep learning computations. These techniques demonstrate significant improvements in throughput, memory bank parallelism, and latency reduction. The insights gained from this survey can guide researchers, system designers, and practitioners in developing more efficient and powerful deep learning systems on GPUs. Furthermore, potential future research directions include advanced scheduling techniques, energy efficiency considerations, and the integration of emerging computing technologies. Through continuous advancement in scheduling techniques, the full potential of GPUs can be unlocked for a wide range of applications and domains, including GPU-accelerated deep learning, task scheduling, resource management, memory optimization, and more. Full article
(This article belongs to the Special Issue Emerging Applications of FPGAs and Reconfigurable Computing System)
16 pages, 716 KiB  
Article
Efficient Graph Representation Learning by Non-Local Information Exchange
by Ziquan Wei, Tingting Dan, Jiaqi Ding and Guorong Wu
Electronics 2025, 14(5), 1047; https://doi.org/10.3390/electronics14051047 - 6 Mar 2025
Viewed by 452
Abstract
Graphs are an effective data structure for characterizing ubiquitous connections as well as evolving behaviors that emerge in intertwined systems. Limited by the stereotype of node-to-node connections, learning node representations is often confined in a graph diffusion process where local information has been excessively aggregated, as the random walk of graph neural networks (GNN) explores far-reaching neighborhoods layer-by-layer. In this regard, tremendous efforts have been made to alleviate feature over-smoothing issues such that current backbones can lend themselves to be used in a deep network architecture. However, compared to designing a new GNN, less attention has been paid to the underlying topology by graph re-wiring, which mitigates not only the flaws of the random walk but also the over-smoothing risk, by reducing unnecessary diffusion in deep layers. Inspired by the notion of non-local mean techniques in the area of image processing, we propose a non-local information exchange mechanism by establishing an express connection to the distant node, instead of propagating information along the (possibly very long) original pathway node-after-node. Since the process of seeking express connections throughout a graph can be computationally expensive in real-world applications, we propose a re-wiring framework (coined the express messenger wrapper) to progressively incorporate express links in a non-local manner, which allows us to capture multi-scale features without using a very deep model; our approach is thus free of the over-smoothing challenge. We integrate our express messenger wrapper with existing GNN backbones (either using graph convolution or tokenized transformer) and achieve a new record on the Roman-empire dataset as well as SOTA performance on both homophilous and heterophilous datasets. Full article
(This article belongs to the Special Issue Artificial Intelligence in Graphics and Images)
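A toy sketch of the express-link idea (the selection criterion and names are ours, not the paper's wrapper): connect feature-similar but topologically distant nodes so messages can skip the long node-after-node pathway.

```python
import networkx as nx
import numpy as np

def add_express_links(G: nx.Graph, feats: dict, min_hops: int = 4,
                      sim_thresh: float = 0.9) -> nx.Graph:
    """Return a rewired copy of G with non-local 'express' edges added."""
    H = G.copy()
    nodes = list(G.nodes)
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            try:
                hops = nx.shortest_path_length(G, u, v)
            except nx.NetworkXNoPath:
                continue
            fu, fv = feats[u], feats[v]
            sim = fu @ fv / (np.linalg.norm(fu) * np.linalg.norm(fv))
            if hops >= min_hops and sim >= sim_thresh:
                H.add_edge(u, v)          # non-local express connection
    return H
```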
23 pages, 7710 KiB  
Article
Immersive Interaction for Inclusive Virtual Reality Navigation: Enhancing Accessibility for Socially Underprivileged Users
by Jeonghyeon Kim, Jung-Hoon Ahn and Youngwon Kim
Electronics 2025, 14(5), 1046; https://doi.org/10.3390/electronics14051046 - 6 Mar 2025
Viewed by 661
Abstract
Existing virtual reality (VR) street view and 360-degree road view applications often rely on complex controllers or touch interfaces, which can hinder user immersion and accessibility. These challenges are particularly pronounced for under-represented populations, such as older adults and individuals with limited familiarity with digital devices. Such groups frequently face physical or environmental constraints that restrict their ability to engage in outdoor activities, highlighting the need for alternative methods of experiencing the world through virtual environments. To address this issue, we propose a VR street view application featuring an intuitive, gesture-based interface designed to simplify user interaction and enhance accessibility for socially disadvantaged individuals. Our approach seeks to optimize digital accessibility by reducing barriers to entry, increasing user immersion, and facilitating a more inclusive virtual exploration experience. Through usability testing and iterative design, this study evaluates the effectiveness of gesture-based interactions in improving accessibility and engagement. The findings emphasize the importance of user-centered design in fostering an inclusive VR environment that accommodates diverse needs and abilities. Full article
31 pages, 875 KiB  
Article
Hierarchical Traffic Engineering in 3D Networks Using QoS-Aware Graph-Based Deep Reinforcement Learning
by Robert Kołakowski, Lechosław Tomaszewski, Rafał Tępiński and Sławomir Kukliński
Electronics 2025, 14(5), 1045; https://doi.org/10.3390/electronics14051045 - 6 Mar 2025
Viewed by 520
Abstract
Ubiquitous connectivity is envisioned through the integration of terrestrial (TNs) and non-terrestrial networks (NTNs). However, NTNs face multiple routing and Quality of Service (QoS) provisioning challenges due to the mobility of network nodes. Distributed Software-Defined Networking (SDN) combined with Multi-Agent Deep Reinforcement Learning (MADRL) is widely used to introduce programmability and intelligent Traffic Engineering (TE) in TNs, yet applying DRL to NTNs is hindered by frequently changing state sizes, model scalability, and coordination issues. This paper introduces 3DQR, a novel TE framework that combines hierarchical multi-controller SDN, hierarchical MADRL based on Graph Neural Networks (GNNs), and network topology predictions for QoS path provisioning, effective load distribution, and flow rejection minimisation in future 3D networks. To enhance SDN scalability, metrics and path-operation abstractions are introduced to facilitate the coordination of domain agents by the global agent. To the best of the authors’ knowledge, 3DQR is the first routing scheme to integrate MADRL and GNNs for optimising centralised routing and path allocation in SDN-based 3D mobile networks. The evaluations show up to a 14% reduction in flow rejection rate, a 50% improvement in traffic distribution, and effective QoS class prioritisation compared to baseline techniques. 3DQR also exhibits strong transfer capabilities, giving consistent performance gains in previously unseen environments. Full article
(This article belongs to the Special Issue Future Generation Non-Terrestrial Networks)
25 pages, 2508 KiB  
Article
OVSLT: Advancing Sign Language Translation with Open Vocabulary
by Ai Wang, Junhui Li, Wuyang Luan and Lei Pan
Electronics 2025, 14(5), 1044; https://doi.org/10.3390/electronics14051044 - 6 Mar 2025
Viewed by 525
Abstract
Hearing impairments affect approximately 1.5 billion individuals worldwide, highlighting the critical need for effective communication tools between deaf and hearing populations. Traditional sign language translation (SLT) models predominantly rely on gloss-based methods, which convert visual sign language inputs into intermediate gloss sequences before generating textual translations. However, these methods are constrained by their reliance on extensive annotated data, susceptibility to error propagation, and inadequate handling of low-frequency or unseen sign language vocabulary, thus limiting their scalability and practical application. Drawing upon multimodal translation theory, this study proposes the open-vocabulary sign language translation (OVSLT) method, designed to overcome these challenges by integrating open-vocabulary principles. OVSLT introduces two pivotal modules: Enhanced Caption Generation and Description (CGD), and Grid Feature Grouping with Advanced Alignment Techniques. The Enhanced CGD module employs a GPT model enhanced with a Negative Retriever and Semantic Retrieval-Augmented Features (SRAF) to produce semantically rich textual descriptions of sign gestures. In parallel, the Grid Feature Grouping module applies Grid Feature Grouping, contrastive learning, feature-discriminative contrastive loss, and balanced region loss scaling to refine visual feature representations, ensuring robust alignment with textual descriptions. We evaluated OVSLT on the PHOENIX-14T and CSL-Daily datasets. The results demonstrated a ROUGE score of 29.6% on the PHOENIX-14T dataset and 30.72% on the CSL-Daily dataset, significantly outperforming existing models. These findings underscore the versatility and effectiveness of OVSLT, showcasing the potential of open-vocabulary approaches to surpass the limitations of traditional SLT systems and contribute to the evolving field of multimodal translation. Full article
23 pages, 8182 KiB  
Article
Sound Source Localization Using Deep Learning for Human–Robot Interaction Under Intelligent Robot Environments
by Hong-Min Jo, Tae-Wan Kim and Keun-Chang Kwak
Electronics 2025, 14(5), 1043; https://doi.org/10.3390/electronics14051043 - 6 Mar 2025
Viewed by 656
Abstract
In this paper, we propose Sound Source Localization (SSL) using deep learning for Human–Robot Interaction (HRI) under intelligent robot environments. The proposed SSL method consists of three steps. The first step preprocesses the sound source to minimize noise and reverberation in the robotic environment. Excitation source information (ESI), which contains only the original components of the sound source, is extracted from a sound source in a microphone array mounted on a robot to minimize background influence. Here, the linear prediction residual is used as the ESI. Subsequently, the cross-correlation signal between each adjacent microphone pair is calculated by using the ESI signal of each sound source. To minimize the influence of noise, a Generalized Cross-Correlation with the phase transform (GCC-PHAT) algorithm is used. In the second step, we design a single-channel, multi-input convolutional neural network that can independently learn the calculated cross-correlation signal between each adjacent microphone pair and the location of the sound source using the time difference of arrival. The third step classifies the location of the sound source after training with the proposed network. Previous studies have primarily used various features as inputs and stacked them into multiple channels, which made the algorithm complex. Furthermore, multi-channel inputs may not be sufficient to clearly train the interrelationship between each sound source. To address this issue, the cross-correlation signal between each sound source alone is used as the network input. The proposed method was verified on the Electronics and Telecommunications Research Institute-Sound Source Localization (ETRI-SSL) database acquired from the robotic environment. The experimental results revealed that the proposed method showed an 8.75% higher performance in comparison to the previous works. Full article
(This article belongs to the Special Issue Control and Design of Intelligent Robots)
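GCC-PHAT has a compact textbook formulation; the sketch below computes the time difference of arrival for one microphone pair (standard form; the paper's LP-residual ESI preprocessing is omitted):

```python
import numpy as np

def gcc_phat(x: np.ndarray, y: np.ndarray, fs: float):
    """Return (TDOA in seconds, cross-correlation) for one microphone pair."""
    n = len(x) + len(y)                      # zero-pad to avoid wrap-around
    X, Y = np.fft.rfft(x, n=n), np.fft.rfft(y, n=n)
    R = X * np.conj(Y)
    R /= np.abs(R) + 1e-12                   # PHAT: keep phase, drop magnitude
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))  # center zero lag
    tdoa = (np.argmax(np.abs(cc)) - max_shift) / fs
    return tdoa, cc
```

The cross-correlation signals returned per adjacent pair are what the abstract describes feeding, one per input, into the multi-input CNN.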
32 pages, 2960 KiB  
Article
Comparing Application-Level Hardening Techniques for Neural Networks on GPUs
by Giuseppe Esposito, Juan-David Guerrero-Balaguera, Josie E. Rodriguez Condia and Matteo Sonza Reorda
Electronics 2025, 14(5), 1042; https://doi.org/10.3390/electronics14051042 - 6 Mar 2025
Viewed by 593
Abstract
Neural networks (NNs) are essential in advancing modern safety-critical systems. Lightweight NN architectures are deployed on resource-constrained devices using hardware accelerators like Graphics Processing Units (GPUs) for fast responses. However, the latest semiconductor technologies may be affected by physical faults that can jeopardize the NN computations, making fault mitigation crucial for safety-critical domains. Recent studies propose software-based Hardening Techniques (HTs) to address these faults. However, the proposed fault countermeasures are evaluated through different hardware-agnostic error models and different test benches, neglecting the effort required for their implementation. Comparing application-level HTs across different studies is challenging, leaving unclear (i) their effectiveness against hardware-aware error models on any NN and (ii) which HTs provide the best trade-off between reliability enhancement and implementation cost. In this study, application-level HTs are evaluated homogeneously and independently by performing a study on the feasibility of implementation and a reliability assessment under two hardware-aware error models: (i) weight single bit-flips and (ii) neuron bit error rate. Our results indicate that not all HTs suit every NN architecture, and their effectiveness varies depending on the evaluated error model. Techniques based on range restriction of the activation function consistently outperform others, achieving up to 58.23% greater mitigation effectiveness while keeping the inference-time overhead low and requiring only a contained implementation effort. Full article
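A minimal sketch of the activation range-restriction family the study finds most effective; the profiling margin and wiring here are illustrative assumptions, not the evaluated implementations:

```python
import numpy as np

def profile_upper_bound(fault_free_acts: np.ndarray, margin: float = 1.05) -> float:
    """Record the largest activation seen on fault-free runs, plus a margin."""
    return float(fault_free_acts.max()) * margin

def restricted_relu(x: np.ndarray, upper: float) -> np.ndarray:
    # A bit-flip that produces a huge activation is clamped back into the
    # profiled range, so the error cannot propagate unbounded.
    return np.clip(x, 0.0, upper)
```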
12 pages, 4054 KiB  
Article
Low-Frequency Communication Based on Rydberg-Atom Receiver
by Yipeng Xie, Mingwei Lei, Jianquan Zhang, Wenbo Dong and Meng Shi
Electronics 2025, 14(5), 1041; https://doi.org/10.3390/electronics14051041 - 6 Mar 2025
Viewed by 495
Abstract
Rydberg-atom receivers have developed rapidly with increasing sensitivity. However, studies on their application in low-frequency electric fields remain limited. In this work, we demonstrate low-frequency communication using an electrode-embedded atom cell and a whip antenna without the need for a low-noise amplifier (LNA). Three modulations—binary phase-shift keying (BPSK), on–off keying (OOK), and two-frequency shift keying (2FSK)—were employed for communication using a Rydberg-atom receiver operating near 100 kHz. The signal-to-noise ratio (SNR) of the modulated low-frequency signal received by Rydberg atoms was measured at various emission voltages. Additionally, we demonstrated the in-phase and quadrature (IQ) constellation diagram, error vector magnitude (EVM), and eye diagram of the demodulated signal at different symbol rates. The EVM values were measured to be 8.8% at a symbol rate of 2 kbps, 9.4% at 4 kbps, and 13.7% at 8 kbps. The high-fidelity digital color image transmission achieved a peak signal-to-noise ratio (PSNR) of 70 dB. Our results demonstrate the feasibility of a Rydberg-atom receiver for low-frequency communication applications. Full article
(This article belongs to the Topic Quantum Wireless Sensing)
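The EVM figures quoted follow the usual RMS definition over demodulated IQ symbols; a minimal computation with toy QPSK symbols (not the measured data):

```python
import numpy as np

rng = np.random.default_rng(0)
ideal = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)   # QPSK
rx = ideal + 0.05 * (rng.standard_normal(4) + 1j * rng.standard_normal(4))

evm_percent = 100 * np.sqrt(np.mean(np.abs(rx - ideal) ** 2)
                            / np.mean(np.abs(ideal) ** 2))
```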
24 pages, 4633 KiB  
Article
Load Equipment Segmentation and Assessment Method Based on Multi-Source Tensor Feature Fusion
by Xiaoli Zhang, Congcong Zhao, Wenjie Lu and Kun Liang
Electronics 2025, 14(5), 1040; https://doi.org/10.3390/electronics14051040 - 5 Mar 2025
Viewed by 623
Abstract
The state monitoring of power load equipment plays a crucial role in ensuring its normal operation. However, in densely deployed environments, the target equipment often exhibits low clarity, making real-time warnings challenging. In this study, a load equipment segmentation and assessment method based on multi-source tensor feature fusion (LSA-MT) is proposed. First, a lightweight residual block based on the attention mechanism is introduced into the backbone network to emphasize key features of load devices and enhance target segmentation efficiency. Second, a 3D edge detail feature perception module is designed to facilitate multi-scale feature fusion while preserving boundary detail features of different devices, thereby improving local recognition accuracy. Finally, tensor decomposition and reorganization are employed to guide visual feature reconstruction in conjunction with equipment monitoring images, while tensor mapping of equipment monitoring data is utilized for automated fault classification. The experimental results demonstrate that LSA-MT produces visually clearer segmentations compared to models such as the classic UNet++ and the more recent EGE-UNet when segmenting multiple load devices, achieving Dice and mIoU scores of 92.48 and 92.90, respectively. Regarding classification across the four datasets, the average accuracy can reach 92.92%. These findings fully demonstrate the effectiveness of the LSA-MT method in load equipment fault alarms and grid operation and maintenance. Full article
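For context, the Dice and mIoU scores reported are the standard overlap metrics for segmentation; a minimal computation over binary masks (toy data, not the paper's):

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())          # 2|A∩B| / (|A|+|B|)

def iou(a: np.ndarray, b: np.ndarray) -> float:
    inter = np.logical_and(a, b).sum()
    return inter / np.logical_or(a, b).sum()          # |A∩B| / |A∪B|
```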
34 pages, 10149 KiB  
Article
Enhancing Blended Learning Evaluation Through a Blockchain and Searchable Encryption Approach
by Fei Ren, Bo Zhao, Jun Wang, Ju-Xiang Zhou and Tian-Yu Xie
Electronics 2025, 14(5), 1039; https://doi.org/10.3390/electronics14051039 - 5 Mar 2025
Viewed by 502
Abstract
With the rapid development of information technology, blended learning has become a crucial aspect of modern education. However, the fragmented use of various teaching platforms, such as Xuexitong and Rain Classroom, has led to the dispersion of teaching data. This not only increases the cognitive load on teachers and students but also hinders the systematic recording of teaching activities and learning outcomes. Moreover, existing blended learning evaluation systems exhibit significant shortcomings in large-scale data storage and secure sharing. To address these issues, this study designs a blended teaching evaluation management system based on blockchain and searchable encryption. First, an on-chain and off-chain collaborative storage model is established using the Ethereum blockchain and the InterPlanetary File System (IPFS) to ensure secure and large-scale storage of student work data. Next, a role-based access control scheme utilizing smart contracts is proposed to effectively prevent unauthorized access. Simultaneously, a searchable encryption scheme is designed using AES-CBC-256 and SHA-256 algorithms, enabling data sharing while safeguarding data privacy. Additionally, the smart contract comprehensively records students’ grade information, including weekly regular scores, midterm scores, final scores, overall scores, and their rankings, ensuring transparency in the evaluation process. Based on these technical solutions, a general-purpose teaching evaluation management system (B-Education) is developed. The experimental results demonstrate that the system accurately records teaching activities and learning outcomes, improving the transparency of teaching evaluations while ensuring data security and privacy. The system’s gas consumption remains within a reasonable range, demonstrating good flexibility and usability. Educational institutions can flexibly configure course evaluation criteria and adjust the weighting of various grades based on their specific needs. This study provides an innovative solution for blended teaching evaluation, offering significant theoretical value and practical implications. Full article
(This article belongs to the Special Issue Security and Privacy in Networks)
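A sketch of the two cryptographic primitives the scheme names (AES-CBC-256 for confidentiality, SHA-256 as a deterministic keyword token); the actual on-chain index, IPFS storage, and access-control contracts are beyond this snippet, and the key handling here is illustrative only:

```python
import os, hashlib
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, iv = os.urandom(32), os.urandom(16)     # 256-bit key, 128-bit IV

def encrypt_record(data: bytes) -> bytes:
    padder = padding.PKCS7(128).padder()     # pad to the AES block size
    padded = padder.update(data) + padder.finalize()
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return enc.update(padded) + enc.finalize()

def search_token(keyword: str) -> str:
    # Equal keywords map to equal digests, enabling exact-match search
    # over ciphertexts without revealing the plaintext keyword.
    return hashlib.sha256(keyword.encode("utf-8")).hexdigest()
```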
6 pages, 153 KiB  
Editorial
Data Privacy and Cybersecurity in Mobile Crowdsensing
by Chuan Zhang, Tong Wu and Weiting Zhang
Electronics 2025, 14(5), 1038; https://doi.org/10.3390/electronics14051038 - 5 Mar 2025
Cited by 1 | Viewed by 544
Abstract
Mobile crowdsensing (MCS) has emerged as a pivotal element in contemporary communication technology, witnessing substantial growth recently [...] Full article
(This article belongs to the Special Issue Data Privacy and Cybersecurity in Mobile Crowdsensing)
13 pages, 3529 KiB  
Article
An Online Equivalent Series Resistance Estimation Method for Output Capacitor of Buck Converter Based on Inductor Current Ripple Fitting
by Lei Ren, Jiacheng Li and Mengyao Jiang
Electronics 2025, 14(5), 1037; https://doi.org/10.3390/electronics14051037 - 5 Mar 2025
Viewed by 486
Abstract
A Buck converter in the DC microgrid is often used to transform high DC voltage to meet the requirements of low voltage loads, where electrolytic capacitors are commonly regarded as the most vulnerable components. Many studies have shown that equivalent series resistance (ESR) is the best health indicator for electrolytic capacitors, which means that it is significant to monitor the variation in ESR values for health evaluation. This paper presents a non-intrusive online ESR estimation method of the output capacitor for a Buck converter based on inductor current ripple fitting. In this method, only the output voltage is sampled, and the inductor/capacitor current ripple is fitted by use of the characteristics of the output voltage ripple. ESR calculation is implemented based on the orthogonality of the voltage ripple and the fitted current ripple, which offers high precision and noise immunity. Compared to existing methods, the proposed scheme does not require additional current sensors or high-precision trigger sampling devices, making it a cost-effective solution. Based on the proposed scheme, accurate ESR estimation is achieved for both continuous conduction mode (CCM) and discontinuous conduction mode (DCM). An experimental ESR monitoring system platform is built, and experimental estimation results are provided to verify the method's effectiveness and precision. Full article
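The orthogonality argument can be sketched as follows (our notation, consistent with the abstract but not taken from the paper): over one switching period the capacitive term, being the integral of a zero-mean ripple current, is orthogonal to the ripple current itself, so projecting the output-voltage ripple onto the fitted current ripple isolates the resistive part:

```latex
\[
  v_o(t) \approx \mathrm{ESR}\cdot i_C(t) + \frac{1}{C}\int i_C(t)\,dt
  \quad\Longrightarrow\quad
  \mathrm{ESR} = \frac{\int_T v_o(t)\,i_C(t)\,dt}{\int_T i_C^{2}(t)\,dt}
\]
```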
21 pages, 4292 KiB  
Article
A Deep-Reinforcement-Learning-Based Multi-Source Information Fusion Portfolio Management Approach via Sector Rotation
by Yuxiao Yan, Changsheng Zhang, Yang An and Bin Zhang
Electronics 2025, 14(5), 1036; https://doi.org/10.3390/electronics14051036 - 5 Mar 2025
Viewed by 671
Abstract
As a research objective in quantitative trading, the aim of portfolio management is to find the optimal allocation of funds by following the dynamic changes in stock prices. The principal issue with current portfolio management methods is their narrow focus on a single data source, neglecting the changes and news arising at the sector level. Methods for integrating news data frequently face challenges with regard to quantifying text data and embedding them into portfolio models; this process often necessitates considerable manual labeling. To address these issues, we proposed a sector rotation portfolio management approach based on deep reinforcement learning (DRL) via multi-source information. The multi-source information includes the temporal data of sector and stock features, as well as news data. Structurally, the method deploys a dual-layer reinforcement learning architecture comprising a multi-agent sector layer and a graph convolution layer. The former learns sector trends, while the latter learns the connections between stocks within sectors; the impact of news on sectors is integrated through large language models without manual labeling and is fused with the output information of the other modules to provide the final portfolio management scheme. The results of simulation experiments on the Chinese and US (United States) stock markets show that our method demonstrates significant improvements over multiple state-of-the-art approaches. Full article
(This article belongs to the Special Issue AI and Machine Learning in Recommender Systems and Customer Behavior)
25 pages, 8084 KiB  
Article
Efficient Optimization Method of the Meshed Return Plane Through Fusion of Convolutional Neural Network and Improved Particle Swarm Optimization
by Jingling Mei, Haiyue Yuan, Xiuqin Chu and Lei Ding
Electronics 2025, 14(5), 1035; https://doi.org/10.3390/electronics14051035 - 5 Mar 2025
Viewed by 606
Abstract
Reducing distortion of spectral simulation signals in infrared detection systems is essential to improve the precision of detecting fine spectra in space-based carbon monitoring satellites. The rigid-flex printed circuit board (PCB), a vital interconnection structure between detectors and signal conditioning circuits, exhibits signal quality variations due to impedance fluctuations and parasitic capacitance changes induced by its meshed return plane geometry. This periodically varying structure necessitates full-wave field solutions to capture the longitudinal discontinuities. Although full-wave simulations provide accurate characterization, they demand substantial computational resources and time. To address these challenges, we propose an innovative approach to effectively determine optimal meshed return plane designs across various transmission rates. The method integrates a convolutional neural network (CNN) with improved particle swarm optimization (IPSO). First, a CNN model is employed to efficiently predict scattering parameters (S-parameters) for different design configurations, thereby overcoming the inefficiencies associated with iterative full-wave simulation optimization. Then, an IPSO algorithm is implemented to address the optimization challenge of crosstalk and inter-symbol interference (ISI) in signal transmission. Furthermore, to increase the optimization speed and evaluate the system performance under extreme conditions, we propose a fitness function construction method based on double-edge responses (DER) to rapidly generate a worst-case peak distortion analysis (PDA) eye diagram within the IPSO algorithm. The proposed methodology reduces computational complexity by two orders of magnitude relative to the full-wave simulation. Quantitative analysis conducted at a transmission rate of 5 Gbps demonstrates substantial signal quality improvements compared to the empirical PCB design: the eye height increased by 49.7%, and the eye width expanded by 35.7%. The effectiveness of these improvements has been verified through commercial simulation software, proving that the method can provide design support for infrared detection systems. Full article
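For orientation, the core particle-swarm update that any PSO variant builds on is shown below (textbook form; the paper's improved variant and its CNN-predicted eye-diagram fitness are not reproduced here):

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One velocity/position update; each row of x is a candidate design."""
    rng = rng or np.random.default_rng()
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v
```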
30 pages, 1455 KiB  
Article
Automated Formative Feedback for Algorithm and Data Structure Self-Assessment
by Lourdes Araujo, Fernando Lopez-Ostenero, Laura Plaza and Juan Martinez-Romo
Electronics 2025, 14(5), 1034; https://doi.org/10.3390/electronics14051034 - 5 Mar 2025
Viewed by 571
Abstract
Self-evaluation empowers students to progress independently and adapt their pace according to their unique circumstances. A critical facet of self-assessment and personalized learning lies in furnishing learners with formative feedback. This feedback, dispensed following their responses to self-assessment questions, constitutes a pivotal component of formative assessment systems. We hypothesize that it is possible to generate explanations that are useful as formative feedback using different techniques depending on the type of self-assessment question under consideration. This study focuses on a subject taught in a computer science program at a Spanish distance learning university. Specifically, it delves into advanced data structures and algorithmic frameworks, which serve as overarching principles for addressing complex problems. The generation of these explanatory resources hinges on the specific nature of the question at hand, whether theoretical, practical, related to computational cost, or focused on selecting optimal algorithmic approaches. Our work encompasses a thorough analysis of each question type, coupled with tailored solutions for each scenario. To automate this process as much as possible, we leverage natural language processing techniques, incorporating advanced methods of semantic similarity. The results of the assessment of the feedback generated for a subset of theoretical questions validate the effectiveness of the proposed methods, allowing us to seamlessly integrate this feedback into the self-assessment system. According to a survey, students found the resulting tool highly useful. Full article
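One simple proxy for the semantic-similarity step in such feedback pipelines is a TF-IDF cosine score between the learner's answer and a reference explanation; the snippet below is illustrative, while the paper's actual methods are more advanced:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reference = "a balanced search tree keeps lookups logarithmic by bounding its height"
answer = "search stays O(log n) because the tree height is kept bounded"

tfidf = TfidfVectorizer().fit_transform([reference, answer])
score = cosine_similarity(tfidf[0], tfidf[1])[0, 0]   # 0 = unrelated, 1 = identical
```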
17 pages, 2527 KiB  
Article
Three-Stage Multi-Frame Multi-Channel In-Loop Filter of VVC
by Si Li, Honggang Qi, Yundong Zhang and Guoqin Cui
Electronics 2025, 14(5), 1033; https://doi.org/10.3390/electronics14051033 - 5 Mar 2025
Viewed by 415
Abstract
For the Versatile Video Coding (VVC) standard, extensive research has been conducted on in-loop filtering to improve encoding efficiency. However, most methods use only spatial characteristics without exploiting the content correlation across multiple frames or fully utilizing the inter-channel relational information. In this paper, we introduce a novel three-stage Multi-frame Multi-channel In-loop Filtering (3-MMIF) method for VVC that improves the quality of each encoded frame by harnessing the correlations between adjacent frames and channels. Firstly, we establish a comprehensive database containing pairs of encoded and original frames across various scenes. Then, we select the nearest frames in the decode buffer as the reference frames for enhancing the quality of the current frame. Subsequently, we propose a three-stage in-loop filtering method that leverages spatio-temporal and inter-channel correlations. The three-stage method is grounded in the recently developed Residual Dense Network, benefiting from its enhanced generalization ability and feature reuse mechanism. Experimental results demonstrate that our 3-MMIF method, with the encoder’s standard filter tools activated, achieves 2.78%/4.87%/5.13% Bjøntegaard delta bit-rate (BD-Rate) reductions for the Y, U, and V channels over the VVC 17.0 codec for random access configuration on the standard test set, outperforming other VVC in-loop filter methods. Full article
17 pages, 432 KiB  
Article
Efficient Modeling and Usage of Scratchpad Memory for Artificial Intelligence Accelerators
by Cagla Irmak Rumelili Köksal and Sıddıka Berna Örs Yalçın
Electronics 2025, 14(5), 1032; https://doi.org/10.3390/electronics14051032 - 5 Mar 2025
Viewed by 676
Abstract
Deep learning accelerators play a crucial role in enhancing computation-intensive AI applications. Optimizing system resources—such as shared caches, on-chip SRAM, and data movement mechanisms—is essential for achieving peak performance and energy efficiency. This paper explores the trade-off between last-level cache (LLC) and scratchpad memory (SPM) usage in accelerator-based SoCs. To evaluate this trade-off, we introduce a high-speed simulator for estimating the timing performance of complex SoCs and demonstrate the benefits of SPM utilization. Our work shows that dynamic reconfiguration of the LLC into an SPM with prefetching capabilities reduces cache misses while improving resource utilization, performance, and energy efficiency. With SPM usage, we achieve up to 13× speedup and a 10% reduction in energy consumption for CNN backbones. Additionally, our simulator significantly outperforms state-of-the-art alternatives, running 3000× faster than gem5-SALAM for fixed-weight convolution computations and up to 64,000× faster as weight size increases. These results validate the effectiveness of both the proposed architecture and simulator in optimizing deep learning workloads. Full article
(This article belongs to the Special Issue Recent Advances in AI Hardware Design)
15 pages, 968 KiB  
Article
A Radical-Based Token Representation Method for Enhancing Chinese Pre-Trained Language Models
by Honglun Qin, Meiwen Li, Lin Wang, Youming Ge, Junlong Zhu and Ruijuan Zheng
Electronics 2025, 14(5), 1031; https://doi.org/10.3390/electronics14051031 - 5 Mar 2025
Viewed by 606
Abstract
In natural language processing (NLP), Chinese tokenization remains a primary challenge due to the lack of explicit word boundaries in written Chinese. Existing tokenization methods often treat each Chinese character as an indivisible unit, neglecting the finer semantic features embedded in the characters, such as radicals. To tackle this issue, we propose a novel token representation method that integrates radical-based features into the tokenization process. The proposed method extends the vocabulary to include both radicals and original character tokens, enabling a more granular understanding of Chinese text. We conduct experiments on seven datasets covering multiple Chinese natural language processing tasks. The results show that our method significantly improves model performance on downstream tasks. Specifically, the accuracy of BERT on the BQ Corpus dataset improved to 86.95%, a gain of 1.65% over the baseline, and BERT-wwm improved by 1.28%, suggesting that fine-grained radical features offer a more effective solution for Chinese tokenization and pave the way for future research in Chinese text processing. Full article
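The core mechanism, extending the vocabulary so each character can be accompanied by a radical token, can be sketched in a few lines. The tiny lookup table and the `[RAD:…]` marker format below are placeholders; a real system would use a full character-to-radical dictionary (e.g., the Kangxi radical tables) and integrate the new tokens into the model's embedding layer.

```python
# Minimal sketch of radical-augmented tokenization; table and marker format
# are illustrative stubs, not the paper's vocabulary design.

RADICAL_OF = {"河": "氵", "湖": "氵", "你": "亻", "他": "亻"}  # stub lookup

def radical_augmented_tokens(text):
    tokens = []
    for ch in text:
        tokens.append(ch)                    # original character token
        radical = RADICAL_OF.get(ch)
        if radical is not None:
            tokens.append(f"[RAD:{radical}]")  # assumed radical-token format
    return tokens

print(radical_augmented_tokens("你去河边"))
# ['你', '[RAD:亻]', '去', '河', '[RAD:氵]', '边']
```

Characters sharing a radical (here 河 and 湖 both carry the water radical 氵) thus share a token, giving the model an explicit signal about their semantic kinship.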
19 pages, 7225 KiB  
Article
Utilization of MCU and Real-Time Simulator for Identifying Beatless Control for Six-Step Operation of Three-Phase Inverter
by Yongsu Han
Electronics 2025, 14(5), 1030; https://doi.org/10.3390/electronics14051030 - 5 Mar 2025
Viewed by 451
Abstract
In industries dealing with motor drive systems, the use of real-time simulators to validate control code is becoming increasingly mandatory. This is particularly essential for systems with advanced control code or complex microcontroller unit (MCU) register configurations, as validation helps prevent accidents and shortens development time. This study presents a validation process using a real-time simulator for the beatless control of six-step operation. When applied to high-speed drives, six-step operation is limited in the number of samples per electrical rotation, which causes voltage errors. A representative voltage-error phenomenon is the beat phenomenon, which produces torque ripple at the first harmonic and high current ripple. To mitigate the beat phenomenon, a synchronous PWM method is sometimes used; however, in practical industrial systems it may not be feasible to adjust the inverter's switching frequency synchronously with the rotation speed. This study proposes a beatless control method that eliminates the voltage errors caused by the beat phenomenon during six-step operation at a fixed switching frequency. The implementation of this control method is explained in terms of MCU timer register settings. While previous studies have only proposed beatless control methods, this paper goes further by implementing the proposed method on an MCU (TMS320F28335) to generate gating signals and validating it through simulation of a permanent magnet synchronous motor on a real-time simulator (Typhoon HIL). Full article
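The beat phenomenon itself is easy to reproduce numerically: at a fixed switching/sampling frequency, the number of samples per electrical rotation is generally non-integer, so the sampling error drifts each period and repeats at a beat frequency. The frequencies in the sketch below are arbitrary illustrations, not values from the paper.

```python
# Numeric illustration of the beat between a fixed switching frequency and
# the electrical fundamental; all values are arbitrary placeholders.

f_sw = 10_000.0  # fixed switching/sampling frequency [Hz]
for f_e in (950.0, 990.0, 1000.0, 1010.0):  # electrical fundamental [Hz]
    samples_per_rev = f_sw / f_e
    nearest = round(samples_per_rev)         # nearest synchronous ratio
    f_beat = abs(f_sw - nearest * f_e)       # repetition rate of the error
    note = "synchronous (no beat)" if f_beat == 0 else f"beat at {f_beat:.0f} Hz"
    print(f"f_e = {f_e:6.1f} Hz -> {samples_per_rev:.3f} samples/rev, {note}")
```

Only at speeds where the ratio is an integer (here 1000 Hz) does the beat vanish, which is why synchronous PWM removes it and why a fixed-frequency drive needs an explicit beatless compensation instead.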
15 pages, 9988 KiB  
Article
Geometry-Aware 3D Hand–Object Pose Estimation Under Occlusion via Hierarchical Feature Decoupling
by Yuting Cai, Huimin Pan, Jiayi Yang, Yichen Liu, Quanli Gao and Xihan Wang
Electronics 2025, 14(5), 1029; https://doi.org/10.3390/electronics14051029 - 5 Mar 2025
Viewed by 568
Abstract
Hand–object occlusion poses a significant challenge in 3D pose estimation. During hand–object interactions, parts of the hand or object are frequently occluded by the other, making it difficult to extract discriminative features for accurate pose estimation. Traditional methods typically extract features for both the hand and object from a single image using a shared backbone network. However, this approach often results in feature contamination, where hand and object features are mixed, especially in occluded regions. To address these issues, we propose a novel 3D hand–object pose estimation framework that explicitly tackles the problem of occlusion through two key innovations. While existing methods rely on a single backbone for feature extraction, our framework introduces a feature decoupling strategy that shares low-level features (using ResNet-50) to capture interaction contexts, while separating high-level features into two independent branches. This design ensures that hand-specific and object-specific features are processed separately, reducing feature contamination and improving pose estimation accuracy under occlusion. Recognizing the correlation between the hand's occluded regions and the object's geometry, we introduce the Hand–Object Cross-Attention Transformer (HOCAT) module. Unlike traditional attention mechanisms that focus solely on feature correlations, the HOCAT leverages the geometric stability of the object as prior knowledge to guide the reconstruction of occluded hand regions. Specifically, the object features (key/value) provide contextual information to enhance the hand features (query), enabling the model to infer the positions of occluded hand joints based on the object's known structure. This approach significantly improves the model's ability to handle complex occlusion scenarios. The experimental results demonstrate that our method achieves notable improvements in hand–object pose estimation on publicly available datasets such as HO3D V2 and Dex-YCB. On the HO3D V2 dataset, the PA-MPJPE reaches 9.1 mm, the PA-MPVPE is 9.0 mm, and the F-score reaches 95.8%. Full article
(This article belongs to the Special Issue Deep Learning for Computer Vision Application)
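The query/key/value arrangement described above, hand features as query and object features as key and value, follows the standard cross-attention pattern, sketched below in PyTorch. The layer sizes, token counts, and module name are assumptions for illustration, not the paper's HOCAT implementation.

```python
# Sketch of object-conditioned cross-attention for occluded-hand features;
# dimensions and names are illustrative assumptions.
import torch
import torch.nn as nn

class HandObjectCrossAttention(nn.Module):
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, hand_feats, obj_feats):
        # Object features act as key/value: their more stable geometry
        # provides context for re-estimating occluded hand regions.
        ctx, _ = self.attn(query=hand_feats, key=obj_feats, value=obj_feats)
        return self.norm(hand_feats + ctx)  # residual connection

hand = torch.rand(2, 49, 256)  # e.g., 7x7 grid of hand feature tokens
obj = torch.rand(2, 49, 256)   # object feature tokens
print(HandObjectCrossAttention()(hand, obj).shape)  # torch.Size([2, 49, 256])
```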
17 pages, 1221 KiB  
Article
Advanced Prompt Engineering in Emergency Medicine and Anesthesia: Enhancing Simulation-Based e-Learning
by Charlotte Meynhardt, Patrick Meybohm, Peter Kranke and Carlos Ramon Hölzing
Electronics 2025, 14(5), 1028; https://doi.org/10.3390/electronics14051028 - 5 Mar 2025
Viewed by 768
Abstract
Medical education is rapidly evolving with the integration of artificial intelligence (AI), particularly through the application of generative AI to create dynamic learning environments. This paper examines the transformative role of prompt engineering in enhancing simulation-based learning in emergency medicine. By enabling the generation of realistic, context-specific clinical case scenarios, prompt engineering fosters critical thinking and decision-making skills among medical trainees. To guide systematic implementation, we introduce the PROMPT+ Framework, a structured methodology for designing, evaluating, and refining prompts in AI-driven simulations, while incorporating essential ethical considerations. Furthermore, we emphasize the importance of developing specialized AI models tailored to regional guidelines, standard operating procedures, and educational contexts to ensure relevance and alignment with current standards and practices. The framework aims to provide a structured approach for engaging with AI-generated medical content, allowing learners to reflect on clinical reasoning, critically assess AI-generated recommendations, and consider the potential role of AI tools in medical training workflows. Additionally, we acknowledge certain challenges associated with the use of AI in education, such as maintaining reliability and addressing potential biases in AI outputs. Our study explores how AI-driven simulations could contribute to scalability and adaptability in medical education, potentially offering structured methods for healthcare professionals to engage with generative AI in training contexts. Full article
(This article belongs to the Special Issue Techniques and Applications in Prompt Engineering and Generative AI)
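As a loose illustration of structured prompt design for a simulated clinical case, a template-based builder might look like the sketch below. The field names and constraint wording are generic placeholders, not the components of the paper's PROMPT+ Framework.

```python
# Generic structured-prompt builder for a simulated clinical scenario;
# fields and phrasing are illustrative assumptions.

TEMPLATE = """You are a simulation instructor for emergency medicine.
Role: {role}
Setting: {setting}
Learner level: {level}
Learning objective: {objective}
Constraints: follow the provided regional guideline excerpt only; state
uncertainty explicitly; do not give definitive treatment advice.
Task: present the case step by step and wait for the learner's decision
before revealing the next finding."""

def build_prompt(role, setting, level, objective):
    return TEMPLATE.format(role=role, setting=setting,
                           level=level, objective=objective)

print(build_prompt(
    role="paramedic team leader",
    setting="out-of-hospital cardiac arrest, urban apartment",
    level="final-year medical student",
    objective="prioritize ALS algorithm steps under time pressure",
))
```

Fixing the scenario structure in a template is one way to make AI-generated cases reproducible and reviewable, which is the precondition for the kind of systematic evaluation the paper argues for.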
15 pages, 346 KiB  
Article
Application of Quantum Computers and Their Unique Properties for Constrained Optimization in Engineering Problems: Welded Beam Design
by Dawid Ewald
Electronics 2025, 14(5), 1027; https://doi.org/10.3390/electronics14051027 - 4 Mar 2025
Viewed by 604
Abstract
The welded beam design problem represents a real-world engineering challenge in structural optimization. The objective is to determine the optimal dimensions of a steel beam and weld length to minimize cost while satisfying constraints related to shear stress (τ), bending stress (σ), critical buckling load (Pc), end deflection (δ), and side constraints. The structural analysis of this problem involves the following four design variables: weld height (x1), weld length (x2), beam thickness (x3), and beam width (x4), commonly denoted in structural engineering as h, l, t, and b, respectively. The structural formulation of this problem leads to a nonlinear objective function, which is subject to five nonlinear and two linear inequality constraints. The optimal solution lies on the boundary of the feasible region, with a very small feasible-to-search-space ratio, making it a highly challenging problem for classical optimization algorithms. This paper explores the application of quantum computing to solve the welded beam optimization problem, utilizing the unique properties of quantum computers for constrained optimization in engineering problems. Specifically, we employ the D-Wave quantum computing system, which utilizes quantum annealing and is particularly well-suited for solving constrained optimization problems. The study presents a detailed formulation of the problem in a format compatible with the D-Wave system, ensuring the efficient encoding of constraints and objective functions. Furthermore, we analyze the performance of quantum computing in solving this problem and compare the obtained results with classical optimization methods. The effectiveness of quantum computing is evaluated in terms of computational efficiency, accuracy, and its ability to navigate complex, constrained search spaces. This research highlights the potential of quantum algorithms in tackling real-world engineering optimization problems and discusses the challenges and limitations of current quantum hardware in solving practical industrial problems. Full article
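For readers unfamiliar with the benchmark, the sketch below shows the standard welded-beam cost function and a binary encoding of a continuous design variable, the kind of discretization needed to express such problems over an annealer's qubits. The stress, deflection, and buckling constraints are omitted for brevity, and the 8-bit granularity is an arbitrary choice, not the paper's formulation.

```python
# Welded-beam benchmark cost plus a QUBO-style binary encoding of one
# continuous variable; bit width and bounds are illustrative assumptions.

def cost(h, l, t, b):
    """Classic welded-beam objective: weld material + beam material cost."""
    return 1.10471 * h**2 * l + 0.04811 * t * b * (14.0 + l)

def encode_binary(value, lo, hi, bits=8):
    """Map a continuous variable onto `bits` binary variables (LSB first)."""
    step = (hi - lo) / (2**bits - 1)
    code = round((value - lo) / step)
    return [(code >> i) & 1 for i in range(bits)]

def decode_binary(code_bits, lo, hi):
    step = (hi - lo) / (2**len(code_bits) - 1)
    return lo + step * sum(b << i for i, b in enumerate(code_bits))

h = 0.2057  # near-optimal weld height widely cited for this benchmark
bits = encode_binary(h, lo=0.1, hi=2.0)
print(bits, round(decode_binary(bits, 0.1, 2.0), 4))  # note quantization error
print(round(cost(0.2057, 3.4705, 9.0366, 0.2057), 4))  # ~1.7246; literature
# reports an optimum of about 1.7249 for this benchmark
```

The encode/decode round trip makes the central difficulty visible: every extra digit of precision on a design variable costs qubits, which is one reason the feasible region of this problem is hard to capture faithfully on current annealing hardware.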