Electronics, Volume 13, Issue 3 (February-1 2024) – 204 articles

Cover Story: Natural disasters, such as earthquakes, involve critical situations that threaten human lives. Victims are often injured and trapped, while in most cases the harsh environment prevents search and rescue (SAR) teams from approaching the victims' locations. In such scenarios, decision-makers must have a real-time and complete view of the current situation. Advances in robotics, drones, edge computing, and the Internet of Things (IoT) have increased their usability in life- and time-critical decision support systems, integrating heterogeneous streaming data generated by IoT entities and providing reasoning capabilities in real time. This paper reviews related technologies and approaches, identifies open issues and challenges, and proposes a novel approach that goes beyond the state of the art.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
12 pages, 1016 KiB  
Article
Secure Healthcare Model Using Multi-Step Deep Q Learning Network in Internet of Things
by Patibandla Pavithra Roy, Ventrapragada Teju, Srinivasa Rao Kandula, Kambhampati Venkata Sowmya, Anca Ioana Stan and Ovidiu Petru Stan
Electronics 2024, 13(3), 669; https://doi.org/10.3390/electronics13030669 - 5 Feb 2024
Viewed by 605
Abstract
Internet of Things (IoT) is an emerging networking technology that connects both living and non-living objects globally. In an era where IoT is increasingly integrated into various industries, including healthcare, it plays a pivotal role in simplifying the process of monitoring and identifying diseases for patients and healthcare professionals. In IoT-based systems, safeguarding healthcare data is of the utmost importance to prevent unauthorized access and intermediary attacks. The motivation for this research lies in addressing the growing security concerns within healthcare IoT. In this paper, we combine the Multi-Step Deep Q Learning Network (MSDQN) with a Deep Learning Network (DLN) to enhance the privacy and security of healthcare data. The DLN is employed in the authentication process to identify authenticated IoT devices and prevent intermediate attacks between them. The MSDQN, on the other hand, is harnessed to detect and counteract malware and Distributed Denial of Service (DDoS) attacks during data transmission between locations. The performance of the proposed method is assessed on parameters such as energy consumption, throughput, lifetime, accuracy, and Mean Square Error (MSE). Further, we compare the effectiveness of our approach with that of an existing method, the Learning-based Deep Q Network (LDQN).
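The multi-step target at the heart of any MSDQN-style learner can be illustrated independently of the paper's architecture. The sketch below shows a generic n-step Q-learning target (not the authors' implementation): rewards observed over several steps are summed with discounting, then a bootstrapped value closes the horizon.

```python
def multi_step_q_target(rewards, gamma, q_next_max):
    """n-step Q-learning target: discounted sum of the observed rewards
    plus the discounted, bootstrapped value of the state reached after n steps."""
    n = len(rewards)
    returns = sum((gamma ** i) * r for i, r in enumerate(rewards))
    return returns + (gamma ** n) * q_next_max

# 3-step target with gamma = 0.9 and a bootstrapped next-state value of 2.0
target = multi_step_q_target([1.0, 0.0, 1.0], 0.9, 2.0)  # 1 + 0.81 + 1.458 = 3.268
```

Compared with a one-step update, the longer reward window propagates information faster, which is the usual motivation for the multi-step variant.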

20 pages, 6579 KiB  
Article
Scalable Hardware-Efficient Architecture for Frame Synchronization in High-Data-Rate Satellite Receivers
by Luca Crocetti, Emanuele Pagani, Matteo Bertolucci and Luca Fanucci
Electronics 2024, 13(3), 668; https://doi.org/10.3390/electronics13030668 - 5 Feb 2024
Viewed by 633
Abstract
The continuous technical advancement of scientific space missions has resulted in a surge in the amount of data that is transferred to ground stations within short satellite visibility windows, which has consequently led to higher throughput requirements for the hardware involved. To aid synchronization algorithms, the communication standards commonly used in such applications define a physical layer frame structure that is composed of a preamble, segments of modulation symbols, and segments of pilot symbols. Therefore, the detection of a frame start becomes an essential operation, whose accuracy is undermined by the large Doppler shift and quantization errors in hardware implementations. In this work, we present a design methodology for frame synchronization modules that are robust against large frequency offsets and rely on a parallel architecture to support high throughput requirements. Several algorithms are evaluated in terms of the trade-off between accuracy and resource utilization, and the best solution is exemplified through its application to the CCSDS 131.2-B-1 and CCSDS 131.21-O-1 standards. The implementation results are reported for a Xilinx KU115 FPGA, thereby showing the capability of supporting baud rates that are greater than 2 Gbaud, as well as a corresponding throughput of 15.80 Gbps. To the best of our knowledge, this paper is the first to propose a design methodology for parallel frame synchronization modules that has applicability to the CCSDS 131.2-B-1 and CCSDS 131.21-O-1 standards.
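Outside any specific standard, the core of frame-start detection is a sliding correlation of the received samples against the known preamble. A minimal single-channel sketch, assuming no frequency offset (a hardware design like the one described would instead use parallel, offset-tolerant partial correlations):

```python
import numpy as np

def detect_frame_start(rx, preamble):
    """Return the sample offset where the received stream best matches
    the known preamble (peak of the sliding correlation magnitude)."""
    corr = np.abs(np.correlate(rx, preamble, mode="valid"))
    return int(np.argmax(corr))

preamble = np.array([1, -1, 1, 1, -1, -1, 1, -1], dtype=float)
rx = np.concatenate([np.zeros(5), preamble, np.zeros(7)])  # frame starts at sample 5
start = detect_frame_start(rx, preamble)
```

In a real receiver the correlation would run on quantized complex samples, which is where the accuracy/resource trade-off studied in the paper arises.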

22 pages, 3005 KiB  
Article
Data-Driven Distributionally Robust Optimization-Based Coordinated Dispatching for Cascaded Hydro-PV-PSH Combined System
by Shuai Zhang, Gao Qiu, Youbo Liu, Lijie Ding and Yue Shui
Electronics 2024, 13(3), 667; https://doi.org/10.3390/electronics13030667 - 5 Feb 2024
Cited by 1 | Viewed by 504
Abstract
The increasing penetration of photovoltaic (PV) and hydroelectric power generation, and their coupled uncertainties, have brought great challenges to the coordinated dispatch of multi-energy systems. Traditional methods such as stochastic optimization (SO) and robust optimization (RO) are not feasible, owing respectively to the unavailability of an accurate probability density function (PDF) and to over-conservative decisions. This limits the operational efficiency of clean energies in areas rich in cascaded hydropower and PV. Based on data-driven distributionally robust optimization (DRO) theory, this paper tailors a joint optimization dispatching method for a cascaded hydro-PV-pumped storage combined system. Firstly, a two-stage DRO dispatch model is developed to create the daily dispatch plan, taking into account the system's complementary economic dispatch cost. Furthermore, a complementary norm constraint is introduced to restrict the confidence set of the probability distribution. This aims to identify the optimal adjustment scheme for the day-ahead dispatch schedule, considering the adjustment cost associated with real-time operations under the most unfavorable distribution conditions. Utilizing the MPSP framework, the Column-and-Constraint Generation (CCG) algorithm is employed to solve the two-stage dispatch model. The optimal dispatch schedule is then produced by integrating the daily dispatch plan with the adjustive dispatch scheme. Finally, numerical dispatch results obtained from an actual demonstration area substantiate the effectiveness and efficiency of the proposed methodology.

14 pages, 5443 KiB  
Article
Charge-Domain Static Random Access Memory-Based In-Memory Computing with Low-Cost Multiply-and-Accumulate Operation and Energy-Efficient 7-Bit Hybrid Analog-to-Digital Converter
by Sanghyun Lee and Youngmin Kim
Electronics 2024, 13(3), 666; https://doi.org/10.3390/electronics13030666 - 5 Feb 2024
Viewed by 842
Abstract
This study presents a charge-domain SRAM-based in-memory computing (IMC) architecture. The multiply-and-accumulate (MAC) operation in IMC structures is performed by either current-domain or charge-domain methods. Current-domain IMC has high power consumption and poor linearity. Charge-domain IMC has reduced variability compared with current-domain IMC, achieving higher linearity and enabling energy-efficient operation with fewer dynamic current paths. The proposed IMC structure uses a 9T1C bitcell, considering the trade-off between the bitcell area and the threshold-voltage drop introduced by an NMOS access transistor. We propose an energy-efficient summation mechanism for 4-bit weight rows to perform energy-efficient MAC operations. The generated MAC value is finally returned as a digital value through an analog-to-digital converter (ADC), whose performance is critical to the overall system. The proposed flash-successive approximation register (SAR) ADC combines the advantages of flash and SAR ADCs and outputs digital values in approximately half the cycles of a SAR ADC. The proposed charge-domain IMC is designed and simulated in a 65 nm CMOS process. It achieves 102.4 GOPS throughput and 33.6 TOPS/W energy efficiency at an array size of 1 Kb.
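The SAR half of such a hybrid converter is a plain binary search against a DAC threshold. A behavioural sketch of a conventional 7-bit SAR conversion (the paper's flash-assisted variant resolves the first bits in parallel to roughly halve the cycle count; that part is not modelled here):

```python
def sar_adc(vin, vref, bits=7):
    """Behavioural SAR ADC: trial-set one bit per cycle, keep it if the
    DAC level it implies is still at or below the input voltage."""
    code = 0
    for b in reversed(range(bits)):          # MSB first
        trial = code | (1 << b)
        if vin >= trial * vref / (1 << bits):
            code = trial                     # keep the bit
    return code

code = sar_adc(0.6, 1.0)  # 7-bit result for a 0.6 V input, 1.0 V reference
```

Each loop iteration corresponds to one comparator cycle, which is why cutting the number of resolved bits (as a flash front-end does) directly shortens conversion time.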
(This article belongs to the Special Issue Mixed Signal Circuit Design, Volume II)

15 pages, 8047 KiB  
Article
Res-BiANet: A Hybrid Deep Learning Model for Arrhythmia Detection Based on PPG Signal
by Yankun Wu, Qunfeng Tang, Weizong Zhan, Shiyong Li and Zhencheng Chen
Electronics 2024, 13(3), 665; https://doi.org/10.3390/electronics13030665 - 5 Feb 2024
Viewed by 694
Abstract
Arrhythmias are among the most prevalent cardiac conditions and frequently serve as a direct cause of sudden cardiac death. Hence, the automated detection of arrhythmias holds significant importance for assisting in the diagnosis of heart conditions. Recently, the photoplethysmography (PPG) signal, capable of conveying heartbeat information, has found application in arrhythmia detection research. This work proposes a hybrid deep learning model, Res-BiANet, designed for the detection and classification of multiple types of arrhythmias. Improved ResNet and BiLSTM models are connected in parallel, and the spatial and temporal features of PPG signals are extracted using the ResNet and BiLSTM branches, respectively. After the BiLSTM, a multi-head self-attention mechanism is incorporated to enhance the extraction of long-range global temporal correlation features. The model classifies five types of arrhythmia rhythms (premature ventricular contractions, premature atrial contractions, ventricular tachycardia, supraventricular tachycardia, and atrial fibrillation) and normal rhythm (sinus rhythm). On this foundation, experiments were conducted on publicly accessible datasets encompassing a total of 46,827 PPG signal fragments from 91 patients with arrhythmias. The experimental results demonstrate that Res-BiANet achieved exceptional classification performance, including an F1 score of 86.88%, overall accuracy of 92.38%, and precision, sensitivity, and specificity of 88.46%, 85.15%, and 98.43%, respectively. The outstanding performance of the Res-BiANet model suggests significant potential for supporting the auxiliary diagnosis of multiple types of arrhythmias.
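The attention stage appended to the BiLSTM branch is, at its core, standard scaled dot-product attention. A single-head NumPy sketch for illustration (the paper uses a multi-head version inside a trained network):

```python
import numpy as np

def scaled_dot_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d)) V: each output row is a weighted mix of V,
    with weights given by query/key similarity."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# With sharply matching queries and keys, attention approaches a lookup table
out = scaled_dot_attention(10 * np.eye(2), 10 * np.eye(2), np.eye(2))
```

Because every time step attends to every other, this layer captures the long-range temporal correlations that a recurrent model alone handles less directly.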

18 pages, 2473 KiB  
Article
A Test System for Transmission Expansion Planning Studies
by Bhuban Dhamala and Mona Ghassemi
Electronics 2024, 13(3), 664; https://doi.org/10.3390/electronics13030664 - 5 Feb 2024
Viewed by 609
Abstract
This paper introduces a 17-bus 500 kV test system intended for transmission expansion planning (TEP) studies. The overhead lines used in the system are based on an actual 500 kV transmission line geometry. Although several test systems have been developed for various forms of power system analysis, few are specifically tailored for TEP studies at the transmission voltage level, as opposed to the distribution voltage level. Existing test systems for TEP studies are limited to a single loading condition under normal operating conditions, and the majority are intertwined with energy-market issues or devised specifically for integrating new generation and loads into existing power systems. However, ensuring that a test system satisfies both voltage drop and line loading criteria during normal operation and all single contingencies is crucial in TEP studies, and addressing these issues under contingency conditions poses notable challenges. Moreover, practical TEP scenarios involve varied loadings, including peak load and dominant loading (60% of peak load) scenarios, while the existing test systems are configured solely for single loading conditions. To address these technical gaps, this paper introduces a 17-bus test system operating at a transmission voltage level of 500 kV that meets the technical requirements under normal operation and all single contingencies for both peak load and dominant load scenarios. Detailed specifications of the proposed test system and load flow analyses under normal and contingency conditions for different loading conditions are presented. This test system serves as a valuable resource for TEP studies.
(This article belongs to the Special Issue Power Delivery Technologies)

18 pages, 590 KiB  
Article
Multi-Agent-Deep-Reinforcement-Learning-Enabled Offloading Scheme for Energy Minimization in Vehicle-to-Everything Communication Systems
by Wenwen Duan, Xinmin Li, Yi Huang, Hui Cao and Xiaoqiang Zhang
Electronics 2024, 13(3), 663; https://doi.org/10.3390/electronics13030663 - 5 Feb 2024
Viewed by 723
Abstract
Offloading computation-intensive tasks to mobile edge computing (MEC) servers, such as road-side units (RSUs) and a base station (BS), can enhance the computation capacities of the vehicle-to-everything (V2X) communication system. In this work, we study an MEC-assisted multi-vehicle V2X communication system in which multi-antenna RSUs with linear receivers and a multi-antenna BS with a zero-forcing (ZF) receiver jointly work as MEC servers to offload the tasks of the vehicles. To control the energy consumption and ensure the delay requirement of the V2X communication system, an energy consumption minimization problem under a delay constraint is formulated. A multi-agent deep reinforcement learning (MADRL) algorithm is proposed to solve the non-convex energy optimization problem; it trains vehicles to intelligently select the beneficial server association, transmit power, and offloading ratio according to a reward function related to delay and energy consumption. An improved K-nearest neighbors (KNN) algorithm is proposed to assign vehicles to specific RSUs, which reduces the action space dimensions and the complexity of the MADRL algorithm. Numerical simulation results show that the proposed scheme can decrease energy consumption while satisfying the delay constraint. When the RSUs adopt the indirect transmission mode and are equipped with matched-filter (MF) receivers, the proposed joint optimization scheme decreases the energy consumption by 56.90% and 65.52% compared to the maximum transmit power and full offloading schemes, respectively. When the RSUs are equipped with ZF receivers, the proposed scheme decreases the energy consumption by 36.8% compared to MF receivers.
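The assignment step can be pictured with the plain nearest-neighbour rule that the paper's improved KNN builds on: each vehicle is attached to its closest RSU, which shrinks the action space each learning agent must explore. Coordinates below are made-up illustrative values:

```python
import numpy as np

def assign_vehicles(vehicle_xy, rsu_xy):
    """Return, for each vehicle, the index of its nearest RSU
    (Euclidean distance on 2-D positions)."""
    diff = vehicle_xy[:, None, :] - rsu_xy[None, :, :]
    dist = np.linalg.norm(diff, axis=2)   # pairwise vehicle-RSU distances
    return dist.argmin(axis=1)

vehicles = np.array([[0.0, 0.0], [9.0, 1.0], [5.1, 0.0]])
rsus = np.array([[0.0, 1.0], [10.0, 0.0]])
serving = assign_vehicles(vehicles, rsus)
```

Once each agent only considers its assigned RSU plus the BS, the joint action space no longer grows with the total number of RSUs, which is the complexity saving the abstract refers to.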
(This article belongs to the Special Issue Advances in Deep Learning-Based Wireless Communication Systems)

19 pages, 917 KiB  
Article
Hybrid Uncertainty Calibration for Multimodal Sentiment Analysis
by Qiuyu Pan and Zuqiang Meng
Electronics 2024, 13(3), 662; https://doi.org/10.3390/electronics13030662 - 5 Feb 2024
Viewed by 679
Abstract
In open environments, multimodal sentiment analysis (MSA) often suffers from low-quality data and can be disrupted by noise, inherent defects, and outliers. In some cases, unreasonable multimodal fusion methods can perform worse than unimodal methods. Another challenge of MSA is effectively enabling the model to provide accurate predictions when it is confident and to indicate high uncertainty when its prediction is likely to be inaccurate. In this paper, we propose an uncertainty-aware late fusion based on hybrid uncertainty calibration (ULF-HUC). Firstly, we conduct in-depth research on the issue of sentiment polarity distribution in MSA datasets, establishing a foundation for an uncertainty-aware late fusion method that facilitates the organic fusion of modalities. Then, we propose a hybrid uncertainty calibration method based on evidential deep learning (EDL) that balances accuracy and uncertainty, supporting the reduction of uncertainty in each modality of the model. Finally, we add two common types of noise to validate the effectiveness of the proposed method. We evaluate our model on three publicly available MSA datasets (MVSA-Single, MVSA-Multiple, and MVSA-Single-Small). Our method outperforms state-of-the-art approaches in terms of accuracy, weighted F1 score, and expected uncertainty calibration error (UCE), proving its effectiveness.
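The evidential building block behind such a calibration can be shown with the standard subjective-logic mapping from per-class evidence to a Dirichlet distribution (a generic EDL primitive, not the paper's full hybrid calibration):

```python
import numpy as np

def dirichlet_belief(evidence):
    """Map non-negative per-class evidence e to Dirichlet parameters
    alpha = e + 1, expected class probabilities alpha/S, and an
    uncertainty mass u = K/S that grows as total evidence shrinks."""
    alpha = np.asarray(evidence, dtype=float) + 1.0
    S = alpha.sum()
    return alpha / S, alpha.size / S

probs, u = dirichlet_belief([8.0, 1.0, 0.0])   # confident prediction
_, u_none = dirichlet_belief([0.0, 0.0, 0.0])  # no evidence: fully uncertain
```

This is what lets a late-fusion scheme down-weight a modality that reports high uncertainty instead of blindly averaging its predictions.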

16 pages, 1159 KiB  
Article
Hyperbolic-Embedding-Aided Geographic Routing in Intelligent Vehicular Networks
by Ying Pan and Na Lyu
Electronics 2024, 13(3), 661; https://doi.org/10.3390/electronics13030661 - 5 Feb 2024
Viewed by 466
Abstract
Intelligent vehicular networks connect various smart terminals not only to manned and unmanned vehicles but also to roads and handheld devices. In order to support diverse vehicle-to-everything (V2X) applications in dynamic, intelligent vehicular networks, efficient and flexible routing is fundamental but challenging. Aiming to eliminate the routing voids of traditional Euclidean geographic greedy routing strategies, we propose a hyperbolic-embedding-aided geographic routing strategy (HGR) in this paper. By embedding the network topology into a two-dimensional Poincaré hyperbolic disk, greedy forwarding is performed according to nodes' hyperbolic coordinates. Simulation results demonstrate that the proposed HGR strategy can greatly enhance the routing success rate with a smaller stretch of the routing paths, at little cost in routing computation time.
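The key primitive behind such a scheme is the distance metric of the Poincaré disk: greedy forwarding simply hands the packet to the neighbour hyperbolically closest to the destination. A minimal sketch (the coordinates and topology below are made-up examples, not from the paper):

```python
import math

def poincare_distance(u, v):
    """Hyperbolic distance between two points strictly inside the unit disk."""
    du = 1.0 - (u[0] ** 2 + u[1] ** 2)
    dv = 1.0 - (v[0] ** 2 + v[1] ** 2)
    d2 = (u[0] - v[0]) ** 2 + (u[1] - v[1]) ** 2
    return math.acosh(1.0 + 2.0 * d2 / (du * dv))

def greedy_next_hop(neighbors, dest, coords):
    """Forward to the neighbour whose embedded coordinate is closest
    to the destination in hyperbolic distance."""
    return min(neighbors, key=lambda n: poincare_distance(coords[n], coords[dest]))

coords = {"a": (0.0, 0.0), "b": (0.5, 0.0), "c": (-0.5, 0.0), "d": (0.8, 0.0)}
hop = greedy_next_hop(["b", "c"], "d", coords)  # node "b" lies toward "d"
```

With a tree-like embedding, greedy forwarding on hyperbolic coordinates is far less prone to the local minima (routing voids) that plague Euclidean greedy routing.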
(This article belongs to the Special Issue Recent Advances in Intelligent Vehicular Networks and Communications)

20 pages, 1618 KiB  
Article
Leveraging Artificial Intelligence and Provenance Blockchain Framework to Mitigate Risks in Cloud Manufacturing in Industry 4.0
by Mifta Ahmed Umer, Elefelious Getachew Belay and Luis Borges Gouveia
Electronics 2024, 13(3), 660; https://doi.org/10.3390/electronics13030660 - 5 Feb 2024
Viewed by 1045
Abstract
Cloud manufacturing is an evolving networked framework that enables multiple manufacturers to collaborate in providing a range of services, including design, development, production, and post-sales support. The framework operates on an integrated platform encompassing a range of Industry 4.0 technologies, such as Industrial Internet of Things (IIoT) devices, cloud computing, Internet communication, big data analytics, artificial intelligence, and blockchains. Connecting industrial equipment and robots to the Internet exposes cloud manufacturing to massive cybersecurity and cybercrime risks from external and internal attackers. The impacts can be severe because the physical infrastructure of industries is at stake. One potential method to deter such attacks involves utilizing blockchain and artificial intelligence to track the provenance of IIoT devices. This research explores a practical approach to achieve this by gathering provenance data associated with operational constraints defined in smart contracts and identifying deviations from these constraints through predictive auditing using artificial intelligence. A software architecture spanning IIoT communications to machine learning, which compares the latest data with predictive-auditing outcomes and logs the appropriate risks, was designed, developed, and tested. The state changes in the ledger of smart contracts were linked with the risks so that the blockchain peers can detect high deviations and take action in a timely manner. The research defined constraints related to the physical boundaries and weight-lifting limits allocated to three forklifts and showcased the mechanisms of detecting risks of breaking these constraints with the help of artificial intelligence. It also demonstrated state-change rejections by the blockchain at medium and high risk levels. The study was implemented in Java 8 using JDK 8, the CORDA blockchain framework, and the Weka package for random forest machine learning. As a result, the model, along with its design and implementation, has the potential to enhance efficiency and productivity, foster greater trust and transparency in the manufacturing process, boost risk management, strengthen cybersecurity, and advance sustainability efforts.
(This article belongs to the Special Issue Advances in IoT Security)

23 pages, 729 KiB  
Article
Semi-Supervised Feature Selection of Educational Data Mining for Student Performance Analysis
by Shanshan Yu, Yiran Cai, Baicheng Pan and Man-Fai Leung
Electronics 2024, 13(3), 659; https://doi.org/10.3390/electronics13030659 - 5 Feb 2024
Viewed by 778
Abstract
In recent years, the informatization of the educational system has caused a substantial increase in educational data. Educational data mining can assist in identifying the factors influencing students' performance. However, two challenges have arisen in the field of educational data mining: (1) How to handle the abundance of unlabeled data? (2) How to identify the most crucial characteristics that impact student performance? In this paper, a semi-supervised feature selection framework is proposed to analyze the factors influencing student performance. The proposed method is semi-supervised, enabling the processing of a considerable amount of unlabeled data with only a few labeled instances. Additionally, by solving a feature selection matrix, the weights of each feature can be determined, to rank their importance. Furthermore, various commonly used classifiers are employed to assess the performance of the proposed feature selection method. Extensive experiments demonstrate the superiority of the proposed semi-supervised feature selection approach. The experiments indicate that behavioral characteristics are significant for student performance, and the proposed method outperforms the state-of-the-art feature selection methods by approximately 3.9% when extracting the most important feature.
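The ranking step of such a framework can be pictured with a much-simplified stand-in: fit a weight matrix from features to labels on the labelled subset and score each feature by the norm of its weight row. The paper's method additionally exploits the unlabelled data; everything below (names, data, the ridge solver) is illustrative only:

```python
import numpy as np

def rank_features(X, Y, lam=0.1):
    """Score features by the row norms of a ridge-regularised weight
    matrix W mapping features X to one-hot labels Y; a larger row norm
    means the feature contributes more to the fit."""
    d = X.shape[1]
    W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)
    scores = np.linalg.norm(W, axis=1)
    return np.argsort(scores)[::-1], scores

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
labels = (X[:, 0] > 0).astype(float)         # only feature 0 drives the label
Y = np.column_stack([labels, 1.0 - labels])  # one-hot labels
order, scores = rank_features(X, Y)
```

The row-norm criterion is the same idea as the feature selection matrix in the abstract: features whose rows shrink toward zero are deemed unimportant.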

17 pages, 10876 KiB  
Article
Analysis and Optimization Design Scheme of CMOS Ultra-Wideband Reconfigurable Polyphase Filters on Mismatch and Voltage Loss
by Yingze Wang, Xiaoran Li, Yuanze Wang, Xinghua Wang, Zicheng Liu, Fang Han and Quanwen Qi
Electronics 2024, 13(3), 658; https://doi.org/10.3390/electronics13030658 - 5 Feb 2024
Viewed by 551
Abstract
This manuscript presents an analysis and optimization scheme for ultra-wideband passive reconfigurable polyphase filters (PPFs) that minimizes I/Q (in-phase and quadrature-phase) phase/amplitude mismatch and voltage loss. By building a mathematical model of the voltage transfer, the relationship between the resonant frequency of each stage and the I/Q mismatch, and the relationship between the network impedance and the voltage loss, are revealed, providing a scheme for PPF optimization. A proof-of-concept 2–8 GHz wideband reconfigurable PPF is designed in a 55 nm CMOS process. The optimization scheme enables the designed PPF to achieve an I/Q phase mismatch within 0.2439° and an I/Q amplitude mismatch within 0.098 dB throughout the entire band, and it shows great robustness under Monte Carlo sampling. The maximum voltage loss is 17.7 dB, and the total chip area is 0.174 × 0.145 mm².

14 pages, 3892 KiB  
Article
Flexible Approach for the Generation of Macro-Basis Functions Using the Characteristic Basis Function Method
by Eliseo García, Carlos Delgado and Felipe Cátedra
Electronics 2024, 13(3), 657; https://doi.org/10.3390/electronics13030657 - 5 Feb 2024
Viewed by 525
Abstract
This document presents a method for reducing the memory and CPU time required for the electromagnetic analysis of complex scenarios employing the Characteristic Basis Function Method. This technique introduces a novel procedure to accelerate the calculation of the basis functions and the reduced matrix. The accuracy and computational cost of the approach are studied and validated through a number of test cases, using about one-fifth of the memory required by the conventional approach.
(This article belongs to the Special Issue Computational Electromagnetics Applied to the Field of Antennas)

18 pages, 2769 KiB  
Article
Device Identity Recognition Based on an Adaptive Environment for Intrinsic Security Fingerprints
by Zesheng Xi, Gongxuan Zhang, Bo Zhang and Tao Zhang
Electronics 2024, 13(3), 656; https://doi.org/10.3390/electronics13030656 - 5 Feb 2024
Viewed by 705
Abstract
A device's intrinsic security fingerprint, representing its physical characteristics, serves as a unique identifier for user devices and is highly regarded in the realms of device security and identity recognition. However, fluctuations in environmental noise can introduce variations in the physical features of a device. To address this issue, this paper proposes an innovative method that enables the device's intrinsic security fingerprint to adapt to environmental changes, aiming to improve the accuracy of intrinsic security fingerprint recognition in real-world physical environments. The paper initiates continuous data collection of device features in authentic noisy environments, recording the temporal changes in the device's physical characteristics. The problem of unstable physical features is framed as a restricted statistical learning problem with a localized information structure. An aggregated hypergraph neural network architecture is employed to process the temporally changing physical features, allowing the system to acquire aggregated local state information from the interactive influences of adjacent sequential signals and forming an adaptive, environment-enhanced device intrinsic security fingerprint recognition model. The proposed method enhances the accuracy and reliability of device intrinsic security fingerprint recognition in outdoor environments, thereby strengthening the overall security of terminal devices. Experimental results indicate that the method achieves a recognition accuracy of 98% under continuously changing environmental conditions, a crucial step in reinforcing the security of Internet of Things (IoT) devices confronted with real-world challenges.
(This article belongs to the Special Issue Knowledge Information Extraction Research)

14 pages, 1877 KiB  
Article
Robust Soliton Distribution-Based Zero-Watermarking for Semi-Structured Power Data
by Lei Zhao, Yunfeng Zou, Chao Xu, Yulong Ma, Wen Shen, Qiuhong Shan, Shuai Jiang, Yue Yu, Yihan Cai, Yubo Song and Yu Jiang
Electronics 2024, 13(3), 655; https://doi.org/10.3390/electronics13030655 - 4 Feb 2024
Viewed by 631
Abstract
To ensure the security of online-shared power data, this paper adopts a robust soliton distribution-based zero-watermarking approach for tracing semi-structured power data. The method extracts partial key-value pairs to generate a feature sequence and processes the watermark into an equal number of blocks. The robust soliton distribution from erasure coding, together with redundant error-correction codes, is utilized to generate an intermediate sequence. Subsequently, the error-corrected watermark information is embedded into the feature sequence, creating a zero-watermark for semi-structured power data. In the tracking process, the extraction and analysis of the robust zero-watermark associated with the tracked data facilitate the effective identification and localization of data anomalies. Experimental and simulation validation demonstrates that this method, while ensuring data security, achieves a zero-watermark extraction success rate exceeding 98%. The proposed approach holds significant application value for data monitoring and anomaly tracking in power systems.
(This article belongs to the Special Issue Knowledge Information Extraction Research)
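The degree distribution at the heart of such a scheme can be made concrete. Below is a minimal sketch of Luby's robust soliton distribution as used in LT erasure codes; the parameters `c` and `delta` are illustrative defaults, not values taken from the paper:

```python
import math

def robust_soliton(k, c=0.1, delta=0.5):
    """Robust soliton degree distribution over degrees 1..k (Luby's LT codes).

    Combines the ideal soliton rho with a spike component tau at k/S,
    then normalizes; index 0 is unused and stays at probability 0."""
    S = c * math.log(k / delta) * math.sqrt(k)
    pivot = int(round(k / S))
    rho = [0.0] * (k + 1)
    rho[1] = 1.0 / k
    for i in range(2, k + 1):
        rho[i] = 1.0 / (i * (i - 1))
    tau = [0.0] * (k + 1)
    for i in range(1, pivot):
        tau[i] = S / (i * k)
    if 1 <= pivot <= k:
        tau[pivot] = S * math.log(S / delta) / k
    Z = sum(rho) + sum(tau)                      # normalization constant
    return [(rho[i] + tau[i]) / Z for i in range(k + 1)]
```

Sampling block degrees from this distribution is what gives LT-style codes their robustness to erasures, which is the property the watermark blocks inherit.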

13 pages, 3835 KiB  
Article
Two-Stage Point Cloud Registration Framework Based on Graph Neural Network and Attention
by Xiaoqian Zhang, Junlin Li, Wei Zhang, Yansong Xu and Feng Li
Electronics 2024, 13(3), 654; https://doi.org/10.3390/electronics13030654 - 4 Feb 2024
Viewed by 764
Abstract
In recent years, owing to the wide application of 3D vision in autonomous driving, robot navigation, and cultural heritage protection, 3D point cloud registration has received much attention. However, most current methods are time-consuming and highly sensitive to noise and outliers, resulting in low registration accuracy. We therefore propose TSGANet, a two-stage framework based on a graph neural network and attention, which effectively registers low-overlap point cloud pairs and is robust to variable noise and outliers. Our method decomposes rigid transformation estimation into two stages: global estimation and fine-tuning. At the global estimation stage, multilayer perceptrons directly estimate a seven-dimensional vector representing the rigid transformation from the fused features of the two initial point clouds. For the fine-tuning stage, we extract contextual information through an attentional graph neural network consisting of attention and feature-enhancing modules. A mismatch-suppression mechanism is also proposed to keep our method robust to partially visible data with noise and outliers. Experiments show that our method yields state-of-the-art performance on the ModelNet40 dataset. Full article
(This article belongs to the Section Artificial Intelligence)
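A seven-dimensional rigid-transformation vector is conventionally a unit quaternion plus a translation. As an illustration (not the paper's code), applying such a predicted transform to a point cloud looks like:

```python
import numpy as np

def apply_rigid_transform(points, q, t):
    """Rotate Nx3 points by unit quaternion q = (w, x, y, z), then translate by t."""
    w, x, y, z = q / np.linalg.norm(q)   # normalize so R is a valid rotation
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    return points @ R.T + t
```

Regressing (q, t) rather than a full 3x4 matrix keeps the output on the rigid-motion manifold up to quaternion normalization.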

13 pages, 7819 KiB  
Article
Asymmetric GaN High Electron Mobility Transistors Design with InAlN Barrier at Source Side and AlGaN Barrier at Drain Side
by Beibei Lv, Lixing Zhang and Jiongjiong Mo
Electronics 2024, 13(3), 653; https://doi.org/10.3390/electronics13030653 - 4 Feb 2024
Viewed by 775
Abstract
The InAlN/GaN HEMT has been identified as a promising alternative to the conventional AlGaN/GaN HEMT because its enhanced polarization effect contributes a higher 2DEG density in the GaN channel. However, the InAlN barrier usually suffers from high leakage and, therefore, low breakdown voltage. In this paper, we propose an asymmetric GaN HEMT structure composed of an InAlN barrier at the source side and an AlGaN barrier at the drain side. This novel device combines the advantages of a high 2DEG density at the source side and low electric-field crowding at the drain side. According to TCAD simulation, the proposed asymmetric device exhibits better drain current and transconductance than the AlGaN/GaN HEMT, and a higher breakdown voltage than the InAlN/GaN HEMT. The current collapse effects have also been evaluated from a process-related point of view: the possibly higher interface trap density associated with the two-step epitaxial growth required to fabricate the asymmetric structure does not exacerbate current collapse or degrade reliability. Full article

16 pages, 2629 KiB  
Article
Target Tracking Algorithm Based on Adaptive Strong Tracking Extended Kalman Filter
by Feng Tian, Xinzhao Guo and Weibo Fu
Electronics 2024, 13(3), 652; https://doi.org/10.3390/electronics13030652 - 4 Feb 2024
Viewed by 725
Abstract
Kalman filtering is a common filtering method for millimeter-wave traffic radars. This paper proposes an Adaptive Strong Tracking Extended Kalman Filter (EKF) algorithm to address the classic EKF’s low accuracy and lengthy convergence time. The method, based on the strong tracking (ST) algorithm, incorporates time-varying fading effects into the covariance matrix of the traditional EKF, allowing the covariance matrix to be recalibrated and enabling precise filtering and state estimation of the target vehicle. By modifying the fading and attenuation factors of the ST algorithm and applying orthogonality principles, multiple fine-tuned fading factors obtained from least-squares optimization are introduced together with regionally optimal attenuation factors. Monte Carlo experiments indicate that the average velocity error is reduced by at least 38% compared to existing counterparts. The results validate the efficacy of this methodology for observing vehicular movement in metropolitan regions, satisfying the requirements of millimeter-wave radar technology for traffic monitoring. Full article
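The core idea — inflating the predicted covariance with a fading factor so the filter keeps tracking under model mismatch — can be sketched for the linear case as follows. The fading-factor formula below is the textbook strong-tracking construction, not the paper's adaptively fine-tuned multi-factor variant:

```python
import numpy as np

def stf_step(x, P, z, F, H, Q, R, V_prev, rho=0.95, beta=1.0):
    """One step of a (linear) strong-tracking Kalman filter.

    A fading factor lam >= 1, derived from the innovation covariance,
    inflates the predicted covariance so old state information is
    discounted when the model disagrees with the measurements."""
    x_pred = F @ x
    y = z - H @ x_pred                       # innovation
    yy = np.outer(y, y)
    V = yy if V_prev is None else (rho * V_prev + yy) / (1 + rho)
    N = V - H @ Q @ H.T - beta * R
    M = H @ F @ P @ F.T @ H.T
    lam = max(1.0, np.trace(N) / np.trace(M))  # suboptimal fading factor
    P_pred = lam * F @ P @ F.T + Q             # faded covariance prediction
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new, V, lam
```

With lam clamped at 1 the filter reduces to the ordinary Kalman filter; lam > 1 kicks in exactly when the innovations grow inconsistent with the predicted covariance.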

17 pages, 2757 KiB  
Article
Research on an Enhanced Multimodal Network for Specific Emitter Identification
by Heli Peng, Kai Xie and Wenxu Zou
Electronics 2024, 13(3), 651; https://doi.org/10.3390/electronics13030651 - 4 Feb 2024
Viewed by 568
Abstract
Specific emitter identification (SEI) is the task of distinguishing similar emitters, especially those of the same type and transmission parameters, and is one of the most critical tasks of electronic warfare. However, SEI remains challenging when features have weak physical representation, since feature representation largely determines the recognition results. This article therefore works toward robust feature representation for SEI. Efficient multimodal strategies have great potential for applications using multimodal data and can further improve SEI performance. In this research, we introduce a multimodal emitter identification method that applies multimodal data, time-series radar signals, and feature vector data to an enhanced transformer, which employs a conformer block to embed the raw data and integrates an efficient multimodal feature representation module. Moreover, we employ self-knowledge distillation to mitigate overconfident predictions and reduce intra-class variations. Our study reveals that multimodal data provide sufficient information for specific emitter identification. We also propose the CV-CutMixOut method to augment the time-domain signal. Extensive experiments on real radar datasets indicate that the proposed method achieves more accurate identification results and higher feature discriminability. Full article

0 pages, 680 KiB  
Article
Score-Based Black-Box Adversarial Attack on Time Series Using Simulated Annealing Classification and Post-Processing Based Defense
by Sichen Liu and Yuan Luo
Electronics 2024, 13(3), 650; https://doi.org/10.3390/electronics13030650 - 4 Feb 2024
Cited by 1 | Viewed by 599
Abstract
While deep neural networks (DNNs) have been widely and successfully used for time series classification (TSC) over the past decade, their vulnerability to adversarial attacks has received little attention. Most existing attack methods focus on white-box setups, which are unrealistic since attackers typically have access only to the model’s probability outputs. Defensive methods also have limitations, relying primarily on adversarial retraining, which degrades classification accuracy and requires excessive training time. To address these gaps, we propose two new approaches in this paper: (1) a simulated annealing-based random search attack that finds adversarial examples without gradient estimation, searching only on the l-norm hypersphere of allowable perturbations; and (2) a post-processing defense technique that periodically reverses the trend of corresponding loss values while maintaining the overall trend, using only the classifier’s confidence scores as input. Experiments applying these methods to InceptionNet models trained on the UCR benchmark datasets demonstrate the effectiveness of the attack, achieving success rates of up to 100%. The defense method protected against up to 91.24% of attacks while preserving prediction quality. Overall, this work addresses important gaps in adversarial TSC by introducing a novel black-box attack and a lightweight defense technique. Full article
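The flavor of a score-based simulated-annealing search can be sketched as follows; `confidence` stands in for the victim classifier's probability output for the true class, and the cooling schedule, step size, and bound are illustrative choices, not the paper's exact procedure:

```python
import math
import random

def sa_attack(x, confidence, eps=0.1, steps=300, t0=0.5, seed=0):
    """Score-based black-box attack sketch: simulated annealing over
    perturbations confined to the l-inf ball of radius eps."""
    rng = random.Random(seed)
    cur = [0.0] * len(x)                         # current perturbation
    cur_s = confidence([a + d for a, d in zip(x, cur)])
    best, best_s = list(cur), cur_s
    for k in range(steps):
        temp = t0 * (1.0 - k / steps) + 1e-9     # linear cooling
        cand = [min(eps, max(-eps, d + rng.uniform(-eps, eps) / 4)) for d in cur]
        s = confidence([a + d for a, d in zip(x, cand)])
        # the attacker wants the true-class confidence as low as possible;
        # worse moves are accepted with Boltzmann probability
        if s < cur_s or rng.random() < math.exp(-(s - cur_s) / temp):
            cur, cur_s = cand, s
        if cur_s < best_s:
            best, best_s = list(cur), cur_s
    return [a + d for a, d in zip(x, best)], best_s
```

Because only `confidence` values are queried, no gradients (or gradient estimates) of the model are needed, which is what makes the setting genuinely black-box.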

55 pages, 1876 KiB  
Review
A Survey on Video Streaming for Next-Generation Vehicular Networks
by Chenn-Jung Huang, Hao-Wen Cheng, Yi-Hung Lien and Mei-En Jian
Electronics 2024, 13(3), 649; https://doi.org/10.3390/electronics13030649 - 4 Feb 2024
Viewed by 1309
Abstract
As assisted driving technology advances and vehicle entertainment systems rapidly develop, future vehicles will become mobile cinemas, where passengers can use various multimedia applications in the car. In recent years, progress in multimedia technology has given rise to immersive video experiences: in addition to conventional 2D videos, 360° videos are gaining popularity, and volumetric videos, which can offer users an even more immersive experience, are under discussion. However, these applications place high demands on network capabilities, leading to a dependence on next-generation wireless communication technology to address network bottlenecks. This study therefore provides an exhaustive overview of the latest advancements in video streaming over vehicular networks. First, we introduce related work and background knowledge, and provide an overview of recent developments in vehicular networking and video types. Next, we detail various video processing technologies, including the latest released standards. Detailed explanations are given for the network strategies and wireless communication technologies that can optimize video transmission in vehicular networks, with special attention to the literature on applying current 6G technology developments to vehicular communication. Finally, we propose future research directions and challenges. Building upon the technologies introduced in this paper and considering diverse applications, we suggest a suitable vehicular network architecture for next-generation video transmission. Full article
(This article belongs to the Special Issue Featured Review Papers in Electrical and Autonomous Vehicles)

25 pages, 563 KiB  
Review
A Survey on Challenges and Advances in Natural Language Processing with a Focus on Legal Informatics and Low-Resource Languages
by Panteleimon Krasadakis, Evangelos Sakkopoulos and Vassilios S. Verykios
Electronics 2024, 13(3), 648; https://doi.org/10.3390/electronics13030648 - 4 Feb 2024
Viewed by 1124
Abstract
The field of Natural Language Processing (NLP) has experienced significant growth in recent years, largely due to advancements in Deep Learning technology and especially Large Language Models. These improvements have allowed for the development of new models and architectures that have been successfully applied in various real-world applications. Despite this progress, the field of Legal Informatics has been slow to adopt these techniques. In this study, we conducted an extensive literature review of NLP research focused on legislative documents. We present the current state-of-the-art NLP tasks related to Law Consolidation, highlighting the challenges that arise in low-resource languages. Our goal is to outline the difficulties faced by this field and the methods that have been developed to overcome them. Finally, we provide examples of NLP implementations in the legal domain and discuss potential future directions. Full article
(This article belongs to the Special Issue Generative AI and Its Transformative Potential)

14 pages, 7528 KiB  
Article
Optimal Power Allocation in Optical GEO Satellite Downlinks Using Model-Free Deep Learning Algorithms
by Theodore T. Kapsis, Nikolaos K. Lyras and Athanasios D. Panagopoulos
Electronics 2024, 13(3), 647; https://doi.org/10.3390/electronics13030647 - 4 Feb 2024
Viewed by 634
Abstract
Geostationary (GEO) satellites are employed in optical frequencies for a variety of satellite services providing wide coverage and connectivity. Multi-beam GEO high-throughput satellites offer Gbps broadband rates and, jointly with low-Earth-orbit mega-constellations, are anticipated to enable a large-scale free-space optical (FSO) network. In this paper, a power allocation methodology based on deep reinforcement learning (DRL) is proposed for optical satellite systems disregarding any channel statistics knowledge requirements. An all-FSO, multi-aperture GEO-to-ground system is considered and an ergodic capacity optimization problem for the downlink is formulated with transmitted power constraints. A power allocation algorithm was developed, aided by a deep neural network (DNN) which is fed channel state information (CSI) observations and trained in a parameterized on-policy manner through a stochastic policy gradient approach. The proposed method does not require the channels’ transition models or fading distributions. To validate and test the proposed allocation scheme, experimental measurements from the European Space Agency’s ARTEMIS optical satellite campaign were utilized. It is demonstrated that the predicted average capacity greatly exceeds other baseline heuristic algorithms while strongly converging to the supervised, unparameterized approach. The predicted average channel powers differ only by 0.1 W from the reference ones, while the baselines differ significantly more, about 0.1–0.5 W. Full article
(This article belongs to the Special Issue New Advances of Microwave and Optical Communication)
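The model-free, on-policy idea can be sketched with a likelihood-ratio (REINFORCE-style) update: a softmax mapping keeps the allocation inside the power budget by construction, and the policy logits are nudged in the direction of perturbations that increased the reward. The sum-rate reward and all hyperparameters below are illustrative, not the paper's setup:

```python
import numpy as np

def train_power_policy(h, p_total=1.0, sigma=0.1, lr=0.05, iters=2000, seed=0):
    """Learn softmax logits that split a power budget across beams so as to
    maximize the sum-rate sum(log2(1 + p_i * h_i)); h holds channel gains.
    Gaussian exploration on the logits; no channel statistics are assumed."""
    rng = np.random.default_rng(seed)
    theta = np.zeros_like(h)
    baseline = 0.0                                   # running reward baseline
    for _ in range(iters):
        eps = rng.normal(size=theta.shape)
        a = theta + sigma * eps                      # sampled action (logits)
        p = p_total * np.exp(a) / np.exp(a).sum()    # feasible allocation
        r = np.log2(1.0 + p * h).sum()               # sum-rate reward
        baseline = 0.99 * baseline + 0.01 * r
        theta += lr * (r - baseline) * eps / sigma   # likelihood-ratio gradient
    return p_total * np.exp(theta) / np.exp(theta).sum()
```

The point of the baseline subtraction is variance reduction; the gradient estimate itself needs only reward samples, never the channel's transition model or fading distribution.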

15 pages, 2775 KiB  
Article
VF-Mask-Net: A Visual Field Noise Reduction Method Using Neural Networks
by Zhenyu Zhang, Haogang Zhu and Lei Li
Electronics 2024, 13(3), 646; https://doi.org/10.3390/electronics13030646 - 4 Feb 2024
Viewed by 484
Abstract
Visual Field (VF) measurements, crucial for diagnosing and treating glaucoma, often contain noise originating from both the instrument and subjects during the response process. This study proposes a neural network-based denoising method for VF data, obviating the need for ground truth labels or paired measurements. Using a mask-imposed VF as an input for the neural network, while the original VF serves as a training label, we evaluated performance metrics such as the accuracy, precision, and sensitivity of denoised VFs. Orthogonal experiments were also employed to assess the impact of mask number, mask structure, and replacement strategy on model accuracy. This study reveals that mask number, replacement strategy, and their interaction significantly affect the accuracy of the denoising model. Under recommended parameters, VF-Mask-Net effectively enhances the accuracy and precision of VF measurements. Furthermore, in deterioration detection tasks, denoised VFs display heightened sensitivity compared to their pre-denoising counterparts. Full article

22 pages, 10524 KiB  
Article
Implementation of Constant Temperature–Constant Voltage Charging Method with Energy Loss Minimization for Lithium-Ion Batteries
by Guan-Jhu Chen, Chun-Liang Liu, Yi-Hua Liu and Jhih-Jhong Wang
Electronics 2024, 13(3), 645; https://doi.org/10.3390/electronics13030645 - 4 Feb 2024
Viewed by 871
Abstract
Effective charging techniques must consider charging efficiency, lifecycle, charging time (CT), and battery temperature. Most current charging strategies focus primarily on CT and charging losses (CL), overlooking the crucial influence of battery temperature on battery life. This study therefore proposes a constant temperature–constant voltage (CT-CV) charging method based on minimizing energy losses. The charging process is divided into three stages. In the first stage, constant current (CC) charging at a 2C rate expedites charging but may cause a rapid temperature rise. The second stage is constant temperature charging, in which the charging current is regulated by a PID controller using battery temperature feedback to hold the battery temperature steady. The third stage is constant voltage (CV) charging, which continues until the current drops below the charging cutoff current. After charging completes, the charging time can be calculated, and the charging losses can be determined using the battery equivalent circuit model (ECM). To determine the optimal transition time, the paper employs Coulomb counting and the battery ECM, considering both CT and losses to find the transition time with minimal CL. Transition points are thus optimized by establishing the ECM, measuring the battery’s internal impedance, and simulating various charging scenarios, eliminating the need for repeated physical experiments. Experimental results show that, at the same average temperature rise (TR), the proposed method reduces both the charging time and the maximum TR; at the same CT, both the average and maximum TR decrease. The proposed charging method offers the following advantages: (1) it considers the battery’s equivalent circuit model and charging time simultaneously; (2) the resulting transition point minimizes charging losses; (3) it eliminates the need for multiple experimental iterations. Full article
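The three-stage logic can be sketched against a toy plant. Every plant parameter below (the OCV curve, internal resistance, thermal coefficients, and the PI gains) is illustrative, not from the paper:

```python
def ct_cv_charge(t_ref=35.0, i_cc=2.0, v_cv=4.2, i_cut=0.05,
                 kp=0.5, ki=0.05, dt=1.0, cap_ah=3.0, steps=20000):
    """Toy three-stage CC -> CT -> CV charge with a first-order thermal model
    and Coulomb counting for the state of charge. Returns (soc, t_max, i)."""
    soc, temp, i, integ, t_max = 0.0, 25.0, i_cc, 0.0, 25.0
    ct_mode = False
    for _ in range(steps):
        v_oc = 3.0 + 1.2 * soc                      # crude OCV curve
        if v_oc + 0.05 * i >= v_cv:                 # stage 3: constant voltage
            i = max((v_cv - v_oc) / 0.05, 0.0)      # R_int = 50 mOhm
            if i < i_cut:                           # cutoff current reached
                break
        else:
            if temp >= t_ref:
                ct_mode = True                      # stage 2 engaged
            if ct_mode:                             # PI on temperature error
                err = temp - t_ref
                integ = max(-50.0, min(50.0, integ + err * dt))
                i = max(0.0, min(i_cc, i - (kp * err + ki * integ) * dt))
            # stage 1 (CC): i simply stays at i_cc until CT or CV engages
        soc = min(1.0, soc + i * dt / (cap_ah * 3600.0))    # Coulomb counting
        temp += (0.05 * i * i - 0.01 * (temp - 25.0)) * dt  # heating vs cooling
        t_max = max(t_max, temp)
    return soc, t_max, i
```

Running variations of such a simulation over candidate transition points is, in spirit, how an ECM-based search can replace repeated physical charge experiments.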

17 pages, 6881 KiB  
Article
AE-Qdrop: Towards Accurate and Efficient Low-Bit Post-Training Quantization for A Convolutional Neural Network
by Jixing Li, Gang Chen, Min Jin, Wenyu Mao and Huaxiang Lu
Electronics 2024, 13(3), 644; https://doi.org/10.3390/electronics13030644 - 4 Feb 2024
Cited by 1 | Viewed by 759
Abstract
Block-wise reconstruction with adaptive rounding helps achieve acceptable 4-bit post-training quantization accuracy. However, adaptive rounding is time-intensive, and the optimization space of the weight elements is constrained to a binary set, limiting the performance of quantized models. Moreover, the optimality of block-wise reconstruction requires that subsequent network blocks remain unquantized. To address this, we propose AE-Qdrop, a two-stage post-training quantization scheme encompassing block-wise reconstruction and global fine-tuning. In the block-wise reconstruction stage, a progressive optimization strategy replaces adaptive rounding, enhancing both quantization accuracy and efficiency, and the integration of randomly weighted quantized activation mitigates the risk of overfitting. In the global fine-tuning stage, the weights of each quantized network block are corrected simultaneously through logit matching and feature matching. Experiments on image classification and object detection tasks validate that AE-Qdrop achieves precise and efficient quantization. For 2-bit MobileNetV2, AE-Qdrop outperforms Qdrop in quantization accuracy by 6.26%, with fivefold higher quantization efficiency. Full article
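For context, the basic operation that block-wise reconstruction then optimizes is plain uniform quantization of a weight tensor; a minimal symmetric sketch:

```python
import numpy as np

def quantize_weights(w, n_bits=4):
    """Symmetric uniform quantization: map floats to n_bits signed integers
    and back, returning the dequantized tensor and the scale used."""
    qmax = 2 ** (n_bits - 1) - 1                     # e.g. 7 for 4 bits
    scale = np.abs(w).max() / qmax                   # max-abs calibration
    w_int = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return w_int * scale, scale
```

Rounding each element to the nearest grid point is exactly the per-weight decision that adaptive rounding (and, in AE-Qdrop, its progressive replacement) tries to improve on by minimizing block output error rather than per-weight error.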

24 pages, 2641 KiB  
Article
Analysis of Distance and Environmental Impact on UAV Acoustic Detection
by Diana Tejera-Berengue, Fangfang Zhu-Zhou, Manuel Utrilla-Manso, Roberto Gil-Pita and Manuel Rosa-Zurera
Electronics 2024, 13(3), 643; https://doi.org/10.3390/electronics13030643 - 4 Feb 2024
Viewed by 817
Abstract
This article explores the challenge of acoustic drone detection in real-world scenarios, with an emphasis on how sound propagation over distance affects detection. Learning machines of varying complexity are used for detection, ranging from simpler methods such as linear discriminants, multilayer perceptrons, support vector machines, and random forests to more complex approaches based on deep neural networks such as YAMNet. Our evaluation meticulously assesses the performance of these methods using a carefully curated database of a wide variety of drone and interference sounds. This database, processed through array signal processing and influenced by ambient noise, provides a realistic basis for our analyses. Two different training strategies are explored. In the first, the learning machines are trained with unattenuated signals, aiming to preserve the inherent information of the sound sources, and testing is then carried out under attenuated conditions at various distances with interfering sounds. In this scenario, effective detection is achieved up to 200 m, which is particularly notable with the linear discriminant method. The second strategy involves training and testing with attenuated signals to account for different distances from the source. This strategy significantly extends the effective detection range, reaching up to 300 m for most methods and up to 500 m for the YAMNet-based detector, and raises the possibility of specialized detectors for specific distance ranges. Our study highlights the potential of acoustic drone detection at different distances and encourages further exploration in this research area. Unique contributions include the finding that training with attenuated signals, despite their worse signal-to-noise ratio, improves the overall performance of learning-machine-based detectors and increases the effective detection range, as well as the demonstrated feasibility of real-time detection even with very complex learning machines, opening avenues for practical applications in real-world surveillance scenarios. Full article
(This article belongs to the Section Circuit and Signal Processing)
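The distance effect driving both training strategies follows free-field attenuation. A back-of-the-envelope sketch, using spherical spreading plus an illustrative air-absorption coefficient (not the paper's propagation model):

```python
import math

def received_level_db(source_db, d, d0=1.0, alpha_db_per_m=0.005):
    """Sound level at distance d: spherical spreading (-20 log10(d/d0))
    plus a linear air-absorption term; alpha is an illustrative value."""
    return source_db - 20.0 * math.log10(d / d0) - alpha_db_per_m * (d - d0)

def detectable(source_db, noise_db, d, threshold_snr_db=0.0):
    """Crude detectability test: received SNR must exceed a threshold."""
    return received_level_db(source_db, d) - noise_db >= threshold_snr_db
```

Doubling the distance costs about 6 dB of SNR under spherical spreading alone, which is why detectors trained only on close-range, unattenuated recordings degrade quickly beyond a couple of hundred meters.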

21 pages, 5724 KiB  
Article
A Hybrid Model for Prognostic and Health Management of Electronic Devices
by Alessandro Murgia, Chaitra Harsha, Elena Tsiporkova, Chinmay Nawghane and Bart Vandevelde
Electronics 2024, 13(3), 642; https://doi.org/10.3390/electronics13030642 - 3 Feb 2024
Viewed by 652
Abstract
Techniques for prognostics and health management are becoming common in the electronics domain to reduce the cost of failures. Typically, the proposed techniques rely on either physics-based or data-driven models; only a few studies have explored hybrid models that combine the advantages of both. For this reason, this work investigates the potential of hybrid modeling by presenting a new framework for the diagnostics and prognostics of an electronic system. The methodology is validated on simulation data describing the behavior of a QFN package subject to die delamination. The main results of this work are twofold. First, the hybrid model can achieve better performance than either the physics-based or the data-driven model alone. Second, a baseline is set for the best performance achievable by the hybrid model, allowing the remaining useful life of the package to be estimated. Full article
(This article belongs to the Section Artificial Intelligence)

19 pages, 4169 KiB  
Article
Compensation of Current Sensor Faults in Vector-Controlled Induction Motor Drive Using Extended Kalman Filters
by Teresa Orlowska-Kowalska, Magdalena Miniach and Michal Adamczyk
Electronics 2024, 13(3), 641; https://doi.org/10.3390/electronics13030641 - 3 Feb 2024
Viewed by 708
Abstract
In electric drive systems, some of the most common faults involve measurement equipment, including current sensors (CSs). As information about the stator current is crucial for precise control of AC drives, such a fault significantly affects the quality and security of the entire system. For this reason, a modified extended Kalman filter (EKF) is presented in this paper as an algorithmic solution for restoring the stator current in the event of CS failure. To minimize the impact of rotor and stator resistance variations on estimation quality, the proposed model estimates a general coefficient of their changes. In contrast to solutions known in the literature, the presented model captures changes in both resistances with a single coefficient. This approach keeps the order of the estimator low (fifth), minimizing the tendency toward system instability and reducing the computational burden. Extensive simulation tests have shown a significant improvement in the accuracy of stator current estimation in both motoring and regenerating modes, over a wide speed range (1–100%), and under changes in motor parameters. Full article
(This article belongs to the Special Issue State-of-the-Art Research in Systems and Control Engineering)

26 pages, 352 KiB  
Review
Combining Machine Learning and Edge Computing: Opportunities, Challenges, Platforms, Frameworks, and Use Cases
by Piotr Grzesik and Dariusz Mrozek
Electronics 2024, 13(3), 640; https://doi.org/10.3390/electronics13030640 - 3 Feb 2024
Viewed by 2117
Abstract
In recent years, we have been observing the rapid growth and adoption of IoT-based systems, enhancing multiple areas of our lives. Concurrently, the utilization of machine learning techniques has surged, often for similar use cases as those seen in IoT systems. In this survey, we aim to focus on the combination of machine learning and the edge computing paradigm. The presented research commences with the topic of edge computing, its benefits, such as reduced data transmission, improved scalability, and reduced latency, as well as the challenges associated with this computing paradigm, like energy consumption, constrained devices, security, and device fleet management. It then presents the motivations behind the combination of machine learning and edge computing, such as the availability of more powerful edge devices, improving data privacy, reducing latency, or lowering reliance on centralized services. Then, it describes several edge computing platforms, with a focus on their capability to enable edge intelligence workflows. It also reviews the currently available edge intelligence frameworks and libraries, such as TensorFlow Lite or PyTorch Mobile. Afterward, the paper focuses on the existing use cases for edge intelligence in areas like industrial applications, healthcare applications, smart cities, environmental monitoring, or autonomous vehicles. Full article
(This article belongs to the Special Issue Towards Efficient and Reliable AI at the Edge)
