Electronics, Volume 13, Issue 6 (March-2 2024) – 163 articles

Cover Story: The IBM 8b/10b encoding scheme is used in many communication technologies, including USB, Gigabit Ethernet, and Serial ATA. We propose two primitive-based structural designs of an 8b/10b encoder and two of an 8b/10b decoder, all targeted at modern AMD FPGA architectures and aimed at reducing resource usage. We compare our designs with implementations derived from behavioral models and with state-of-the-art solutions from the literature. The implementation results show that our solutions provide the lowest resource utilization with comparable maximum operating frequency and power consumption, making the proposed structural designs suitable for resource-constrained implementations of data communication protocols that employ the IBM 8b/10b encoding scheme.
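The 8b/10b scheme keeps the line DC-balanced by tracking a running disparity (RD): every transmitted 10-bit code group has a disparity of −2, 0, or +2, and a group with nonzero disparity flips the RD. A minimal Python sketch of that bookkeeping follows; the bit patterns are illustrative, not entries from the actual 8b/10b code tables.

```python
def disparity(bits):
    """Difference between the number of ones and zeros in a code group."""
    return 2 * sum(bits) - len(bits)

def next_rd(rd, code):
    """Update the running disparity (RD) after sending a 10-bit code group.

    A balanced group (disparity 0) leaves RD unchanged; a group with
    nonzero disparity sets RD to the sign of that disparity.
    """
    d = disparity(code)
    if d == 0:
        return rd
    return +1 if d > 0 else -1

# Illustrative code groups (not real 8b/10b table entries):
balanced = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]  # five ones -> disparity 0
heavy    = [1, 1, 1, 0, 1, 0, 1, 1, 0, 0]  # six ones  -> disparity +2

rd = -1
rd = next_rd(rd, balanced)  # stays -1
rd = next_rd(rd, heavy)     # becomes +1
```

In the real scheme, the encoder uses the RD to pick between the two stored encodings of each symbol, which is what the structural designs implement in FPGA primitives.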
17 pages, 3460 KiB  
Article
Reservoir Computing Using Measurement-Controlled Quantum Dynamics
by A. H. Abbas and Ivan S. Maksymov
Electronics 2024, 13(6), 1164; https://doi.org/10.3390/electronics13061164 - 21 Mar 2024
Abstract
Physical reservoir computing (RC) is a machine learning algorithm that employs the dynamics of a physical system to forecast highly nonlinear and chaotic phenomena. In this paper, we introduce a quantum RC system that employs the dynamics of a probed atom in a cavity. The atom experiences coherent driving at a particular rate, leading to a measurement-controlled quantum evolution. The proposed quantum reservoir can make fast and reliable forecasts using a small number of artificial neurons compared with the traditional RC algorithm. We theoretically validate the operation of the reservoir, demonstrating its potential to be used in error-tolerant applications, where approximate computing approaches may be used to make feasible forecasts in conditions of limited computational and energy resources.
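The traditional RC algorithm the abstract compares against is classically realised as an echo state network: a fixed recurrent "reservoir" whose state is a nonlinear function of the input history. A pure-Python sketch of the property that makes a reservoir usable, fading memory (two different initial states driven by the same input converge), with small illustrative weights:

```python
import math

def reservoir_step(x, u, W, W_in, leak=0.5):
    """One leaky-integrator reservoir update: x' = (1-a)x + a*tanh(Wx + W_in*u)."""
    n = len(x)
    pre = [sum(W[i][j] * x[j] for j in range(n)) + W_in[i] * u for i in range(n)]
    return [(1 - leak) * x[i] + leak * math.tanh(pre[i]) for i in range(n)]

# Tiny fixed reservoir with a small spectral radius (illustrative values).
W = [[0.2, -0.1, 0.0],
     [0.1,  0.1, -0.2],
     [0.0,  0.3,  0.1]]
W_in = [0.5, -0.3, 0.8]
u_seq = [math.sin(0.3 * t) for t in range(50)]

xa = [1.0, -1.0, 0.5]   # two different initial reservoir states
xb = [-0.8, 0.9, -0.4]
for u in u_seq:
    xa = reservoir_step(xa, u, W, W_in)
    xb = reservoir_step(xb, u, W, W_in)

gap = max(abs(a - b) for a, b in zip(xa, xb))  # fading memory: gap -> 0
```

A linear readout trained on such states does the actual forecasting; the paper's contribution replaces this classical reservoir with measurement-controlled quantum dynamics.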

20 pages, 31479 KiB  
Article
Design of Lossless Negative Capacitance Multiplier Employing a Single Active Element
by Mutasem Vahbeh, Emre Özer and Fırat Kaçar
Electronics 2024, 13(6), 1163; https://doi.org/10.3390/electronics13061163 - 21 Mar 2024
Abstract
In this paper, a new negative lossless grounded capacitance multiplier (GCM) circuit based on a Current Feedback Operational Amplifier (CFOA) is presented. The proposed circuit includes a single CFOA, four resistors, and a grounded capacitor. In order to reduce the power consumption, the internal structure of the CFOA is realized with dynamic threshold-voltage MOSFET (DTMOS) transistors. The effects of parasitic components on the operating frequency range of the proposed circuit are investigated. The simulation results were obtained with the SPICE program using 0.13 µm IBM CMOS technology parameters. The total power consumption of the circuit was 1.6 mW. The functionality of the circuit is provided by the capacitance cancellation circuit. PVT (Process, Voltage, Temperature) analyses were performed to verify the robustness of the proposed circuit. An experimental study is provided to verify the operability of the proposed negative lossless GCM using commercially available integrated circuits (ICs).

16 pages, 3788 KiB  
Article
An Ultra-Low-Power 65 nm Single-Tank 24.5-to-29.1 GHz Gm-Enhanced CMOS LC VCO Achieving 195.2 dBc/Hz FoM at 1 MHz
by Abdullah Kurtoglu, Amir H. M. Shirazi, Shahriar Mirabbasi and Hossein Miri Lavasani
Electronics 2024, 13(6), 1162; https://doi.org/10.3390/electronics13061162 - 21 Mar 2024
Abstract
A low-power single-core 24.5-to-29.1 GHz CMOS LC voltage-controlled oscillator (VCO) is presented. The proposed VCO uses an innovative differential cross-coupled architecture in which an additional pair is connected to the main pair to increase the effective transconductance, resulting in lower power consumption and reduced phase noise (PN). The proposed VCO is fabricated in a 1P9M standard CMOS process and sustains oscillation at 29.14 GHz with power consumption as low as 455 μW (650 μA from a 0.7 V supply), which is ~20% lower than a conventional CMOS LC VCO without Gm-enhanced differential pairs built through the same process (700 μA from a 0.8 V supply). When consuming 880 μW (1.1 mA from 0.8 V), the proposed VCO exhibits a tuning range of 4.6 GHz (from 24.5 GHz to 29.1 GHz). Moreover, it exhibits a measured phase noise (PN) better than −106.5 dBc/Hz @ 1 MHz and −132.0 dBc/Hz @ 10 MHz, with figure-of-merit (FoM) results of 195.2 dBc/Hz and 200.3 dBc/Hz, respectively.
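The quoted FoM follows the standard VCO figure of merit, FoM = |PN(Δf)| + 20·log10(f0/Δf) − 10·log10(Pdc/1 mW). A quick check in Python roughly reproduces the 1 MHz number; the exact carrier frequency used for the FoM is not stated in the abstract, so the ~25.4 GHz value below (inside the tuning range) is an assumption.

```python
import math

def vco_fom(pn_dbc_hz, f0_hz, offset_hz, p_mw):
    """Standard VCO figure of merit in dBc/Hz."""
    return (-pn_dbc_hz
            + 20 * math.log10(f0_hz / offset_hz)   # frequency normalisation
            - 10 * math.log10(p_mw / 1.0))         # power normalised to 1 mW

# Reported: PN = -106.5 dBc/Hz @ 1 MHz offset, Pdc = 880 uW.
# Carrier assumed ~25.4 GHz for this check.
fom = vco_fom(-106.5, 25.4e9, 1e6, 0.88)  # ~195.2 dBc/Hz
```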

23 pages, 1573 KiB  
Article
Autonomous Threat Response at the Edge Processing Level in the Industrial Internet of Things
by Grzegorz Czeczot, Izabela Rojek and Dariusz Mikołajewski
Electronics 2024, 13(6), 1161; https://doi.org/10.3390/electronics13061161 - 21 Mar 2024
Abstract
Industrial Internet of Things (IIoT) technology, as a subset of the Internet of Things (IoT) in the concept of Industry 4.0 and, in the future, 5.0, will face the challenge of streamlining the way huge amounts of data are processed by the modules that collect the data and those that analyse the data. Given the key features of these analytics, such as reducing the cost of building massive data centres and finding the most efficient way to process data flowing from hundreds of nodes simultaneously, intermediary devices are increasingly being used in this process. Fog and edge devices are hardware devices designed to pre-analyse terabytes of data in a stream and decide in real time which data to send for final analysis, without having to send the data to a central processing unit in huge local data centres or to an expensive cloud. As the number of nodes sending data for analysis via collection and processing devices increases, so does the risk of data streams being intercepted. There is also an increased risk of attacks on this sensitive infrastructure. Maintaining the integrity of this infrastructure is important, and the ability to analyse all data is a resource that must be protected. The aim of this paper is to address the problem of autonomous threat detection and response at the interface of sensors, edge devices, cloud devices with historical data, and finally during the data collection process in data centres. Ultimately, we present a reinforcement learning algorithm adapted to detect threats and immediately isolate infected nodes.
(This article belongs to the Special Issue Advances in Mobile Networked Systems)

15 pages, 5446 KiB  
Article
A Novel Series 24-Pulse Rectifier Operating in Low Harmonic State Based on Auxiliary Passive Injection at DC Side
by Xiaoqiang Chen, Tun Bai, Ying Wang, Jiangyun Gong, Xiuqing Mu and Zhanning Chang
Electronics 2024, 13(6), 1160; https://doi.org/10.3390/electronics13061160 - 21 Mar 2024
Abstract
To reduce the current harmonics on the input side of a multi-pulse rectifier, this paper proposes a low harmonic current source series multi-pulse rectifier based on an auxiliary passive injection circuit at the DC side. The rectifier only needs to add an auxiliary passive injection circuit on the DC side of the series 12-pulse rectifier, which changes its AC input voltage from 12-step waves to 24-step waves. We analyzed the working modes of the rectifier, optimized the turn ratio of the injection transformer to minimize the total harmonic distortion (THD) of the AC-side input voltage, and analyzed the diode open-circuit fault in the auxiliary passive injection circuit. Test verification shows that, after using the passive harmonic injection circuit, the THD of the rectifier's AC-side input voltage is reduced from 14.03% to 4.86%, the THD of the input current is reduced from 5.30% to 2.16%, and the input power factor is increased from 98.86% to 99.83%, improving the power quality.
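THD as quoted here is the ratio of the RMS of the harmonic components to the fundamental. A small Python sketch shows the arithmetic; the harmonic amplitudes below are hypothetical, not the paper's measured spectrum.

```python
import math

def thd(fundamental, harmonics):
    """Total harmonic distortion: RMS of the harmonic amplitudes over the fundamental."""
    return math.sqrt(sum(h * h for h in harmonics)) / fundamental

# Hypothetical 24-step-wave spectrum: residual 23rd and 25th harmonics at
# ~3.4% of the fundamental each already give a THD near the reported 4.86%.
value = thd(1.0, [0.034, 0.034])  # ~0.048, i.e. ~4.8%
```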
(This article belongs to the Special Issue Advanced Technologies in Power Electronics and Electric Drives)

16 pages, 3364 KiB  
Article
Defect Identification of XLPE Power Cable Using Harmonic Visualized Characteristics of Grounding Current
by Minxin Wang, Yong Liu, Youcong Huang, Yuepeng Xin, Tao Han and Boxue Du
Electronics 2024, 13(6), 1159; https://doi.org/10.3390/electronics13061159 - 21 Mar 2024
Abstract
This paper proposes an online monitoring and defect identification method for XLPE power cables using harmonic visualization of grounding currents. Four typical defects, including thermal aging, water ingress and dampness, insulation scratch, and excessive bending, were experimentally conducted. The AC grounding currents of the cable specimens with different defects were measured during operation. By using the chaotic synchronization system, the harmonic distortion was transformed into a 2D scatter diagram with distinctive characteristics. The relationship between the defect type and the diagram features was obtained. A YOLOv5 (you only look once v5) target recognition model was then established based on the dynamic harmonics scatter diagrams for cable defect classification and identification. The results indicated that the overall shape, distribution range, density degree, and typical lines formed by scatter aggregation can reflect the defect type effectively. The proposed method greatly reduces the difficulty of data analysis and enables rapid defect identification of XLPE power cables, which is useful for improving the reliability of the power system.
(This article belongs to the Section Power Electronics)

12 pages, 8185 KiB  
Article
Augmented Reality Visualization and Quantification of COVID-19 Infections in the Lungs
by Jiaqing Liu, Liang Lyu, Shurong Chai, Huimin Huang, Fang Wang, Tomoko Tateyama, Lanfen Lin and Yenwei Chen
Electronics 2024, 13(6), 1158; https://doi.org/10.3390/electronics13061158 - 21 Mar 2024
Abstract
The ongoing COVID-19 pandemic has had a significant impact globally, and the understanding of the disease's clinical features and impacts remains insufficient. An important metric to evaluate the severity of pneumonia in COVID-19 is the CT Involvement Score (CTIS), which is determined by assessing the proportion of infections in the lung field region using computed tomography (CT) images. Interactive augmented reality visualization and quantification of COVID-19 infection from CT allow us to augment the traditional diagnostic techniques and current COVID-19 treatment strategies. Thus, in this paper, we present a system that combines augmented reality (AR) hardware, specifically the Microsoft HoloLens, with deep learning algorithms in a user-oriented pipeline to provide medical staff with an intuitive 3D augmented reality visualization of COVID-19 infections in the lungs. The proposed system includes a graph-based pyramid global context reasoning module to segment COVID-19-infected lung regions, which can then be visualized using the HoloLens AR headset. Through segmentation, we can quantitatively evaluate and intuitively visualize which part of the lung is infected. In addition, by evaluating the infection status in each lobe quantitatively, it is possible to assess the infection severity. We also implemented Spectator View and Sharing a Scene functions into the proposed system, which enable medical staff to present the AR content to a wider audience, e.g., radiologists. By providing a 3D perception of the complexity of COVID-19, the augmented reality visualization generated by the proposed system offers an immersive experience in an interactive and cooperative 3D approach. We expect that this will facilitate a better understanding of CT-guided COVID-19 diagnosis and treatment, as well as improved patient outcomes.

37 pages, 6046 KiB  
Article
Data-Driven Controller for Drivers’ Steering-Wheel Operating Behaviour in Haptic Assistive Driving System
by Simplice Igor Noubissie Tientcheu, Shengzhi Du, Karim Djouani and Qingxue Liu
Electronics 2024, 13(6), 1157; https://doi.org/10.3390/electronics13061157 - 21 Mar 2024
Abstract
An advanced driver-assistance system (ADAS) is critical to driver–vehicle-interaction systems. Driving behaviour modelling and control significantly improve the global performance of ADASs. A haptic assistive system assists the driver by providing a specific torque on the steering wheel according to the driving–vehicle–road profile to improve steering control. However, the main problem is designing a compensator dealing with the high-level uncertainties in different driving scenarios with haptic driver assistance, where different personalities and diverse perceptions of drivers are considered. These differences can lead to poor driving performance if not properly accounted for. This paper focuses on designing a data-driven, model-free compensator considering various driving behaviours with a haptic feedback system. A backpropagation neural network (BPNN) models driving behaviour based on real driving data (speed, acceleration, vehicle orientation, and current steering angle). Then, a genetic algorithm (GA) minimises the integral time absolute error (ITAE) to produce the best multiple proportional-integral-derivative (PID) compensation parameters for various driving behaviours (such as speeding/braking, lane-keeping, and turning), which are then utilised by fuzzy logic to provide different driving commands. An experiment was conducted with five participants in a driving simulator. During the second experiment, seven participants drove in the simulator to evaluate the robustness of the proposed combined GA-PID compensator applied offline and the fuzzy-PID controller applied online. The third experiment was conducted to validate the proposed data-driven controller. The experiment and simulation results evaluated the ITAE of the lateral displacement and yaw angle during various driving behaviours. The results validated the proposed method by significantly enhancing the driving performance.
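The ITAE criterion the GA minimises is the integral of time-weighted absolute error, ∫ t·|e(t)| dt; the time weighting penalises errors that persist late in the response. A sketch of the computation with a hypothetical error trace (the exponential decay below is illustrative, not the paper's data):

```python
import math

def itae(times, errors, dt):
    """Integral of time-weighted absolute error, approximated by a Riemann sum."""
    return sum(t * abs(e) for t, e in zip(times, errors)) * dt

# Hypothetical lateral-displacement error decaying exponentially over 10 s.
dt = 0.01
ts = [k * dt for k in range(1000)]
es = [0.5 * math.exp(-t) for t in ts]

cost = itae(ts, es, dt)  # lower is better; the GA tunes PID gains to minimise this
```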
(This article belongs to the Section Systems & Control Engineering)

25 pages, 815 KiB  
Article
Enhancing Safety in IoT Systems: A Model-Based Assessment of a Smart Irrigation System Using Fault Tree Analysis
by Alhassan Abdulhamid, Md Mokhlesur Rahman, Sohag Kabir and Ibrahim Ghafir
Electronics 2024, 13(6), 1156; https://doi.org/10.3390/electronics13061156 - 21 Mar 2024
Abstract
The agricultural industry has the potential to undergo a revolutionary transformation with the use of Internet of Things (IoT) technology. Crop monitoring can be improved, waste reduced, and efficiency increased. However, there are risks associated with system failures that can lead to significant losses and food insecurity. Therefore, a proactive approach is necessary to ensure the effective safety assessment of new IoT systems before deployment. It is crucial to identify potential causes of failure and their severity from the conceptual design phase of the IoT system within smart agricultural ecosystems. This will help prevent such risks and ensure the safety of the system. This study examines the failure behaviour of IoT-based Smart Irrigation Systems (SIS) to identify potential causes of failure. This study proposes a comprehensive Model-Based Safety Analysis (MBSA) framework to model the failure behaviour of SIS and generate analysable safety artefacts of the system using System Modelling Language (SysML). The MBSA approach provides meticulousness to the analysis, supports model reuse, and makes the development of a Fault Tree Analysis (FTA) model easier, thereby reducing the inherent limitations of informal system analysis. The FTA model identifies component failures and their propagation, providing a detailed understanding of how individual component failures can lead to the overall failure of the SIS. This study offers valuable insights into the interconnectedness of various component failures by evaluating the SIS failure behaviour through the FTA model. This study generates multiple minimal cut sets, which provide actionable insights into designing dependable IoT-based SIS. This analysis identifies potential weak points in the design and provides a foundation for safety risk mitigation strategies. This study emphasises the significance of a systematic and model-driven approach to improving the dependability of IoT systems in agriculture, ensuring sustainable and safe implementation.
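A minimal cut set is a smallest combination of basic-event failures that triggers the top event of a fault tree. The toy fault tree below is an assumption for illustration, not the paper's actual SIS tree: the system fails if the controller fails, or if both the moisture sensor and its backup fail. The sketch enumerates failure combinations and keeps only the minimal ones.

```python
from itertools import combinations

BASIC_EVENTS = ["controller", "sensor", "backup_sensor"]

def system_fails(failed):
    """Top event of the toy fault tree: controller OR (sensor AND backup)."""
    return "controller" in failed or {"sensor", "backup_sensor"} <= failed

def minimal_cut_sets(events, top):
    """Enumerate failure combinations, keep those triggering the top event,
    then drop any set that contains a smaller triggering set."""
    cuts = [frozenset(c)
            for r in range(1, len(events) + 1)
            for c in combinations(events, r)
            if top(set(c))]
    return {c for c in cuts if not any(other < c for other in cuts)}

mcs = minimal_cut_sets(BASIC_EVENTS, system_fails)
# -> {controller} and {sensor, backup_sensor}
```

Real FTA tools derive these sets symbolically rather than by enumeration, but the notion of a minimal cut set as a design weak point is the same.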
(This article belongs to the Collection Electronics for Agriculture)

12 pages, 5256 KiB  
Article
FuNet: Multi-Feature Fusion for Point Cloud Completion Network
by Keming Li, Weiren Zhao, Junjie Liu, Jiahui Wang, Hui Zhang and Huan Jiang
Electronics 2024, 13(6), 1155; https://doi.org/10.3390/electronics13061155 - 21 Mar 2024
Abstract
The densification of a point cloud is a crucial challenge in visual applications, particularly when estimating a complete and dense point cloud from a local and incomplete one. This paper introduces a point cloud completion network named FuNet to address this issue. Current point cloud completion networks adopt various methodologies, including point-based processing and convolution-based processing. Unlike traditional shape completion approaches, FuNet combines point-based and convolution-based processing to extract their features and fuses them through an attention module to generate a complete point cloud from 1024 points to 16,384 points. The experimental results show that, compared with the best-performing completion networks, FuNet decreases the Chamfer Distance (CD) by 5.17% and increases the F-score by 4.75% on the ShapeNet dataset. In addition, FuNet achieves better results in most categories on a small-sample dataset.
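The CD metric reported here is the symmetric Chamfer distance between the predicted and ground-truth point sets: each point is matched to its nearest neighbour in the other set and the (squared) distances are averaged in both directions. A pure-Python sketch on tiny 3D point sets:

```python
def chamfer_distance(P, Q):
    """Symmetric Chamfer distance between two point sets (squared distances)."""
    def sq(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    d_pq = sum(min(sq(p, q) for q in Q) for p in P) / len(P)
    d_qp = sum(min(sq(q, p) for p in P) for q in Q) / len(Q)
    return d_pq + d_qp

# An incomplete cloud vs. the complete shape it should be grown into
# (illustrative coordinates; real clouds have 1024 and 16,384 points).
partial  = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
complete = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.5, 1.0, 0.0)]

cd = chamfer_distance(partial, complete)  # penalises the missing region
```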

14 pages, 16502 KiB  
Article
A Low-Intensity Pulsed Ultrasound Interface ASIC for Wearable Medical Therapeutic Device Applications
by Xuanjie Ye, Xiaoxue Jiang, Shuren Wang and Jie Chen
Electronics 2024, 13(6), 1154; https://doi.org/10.3390/electronics13061154 - 21 Mar 2024
Abstract
Low-intensity pulsed ultrasound (LIPUS) is a non-invasive medical therapy that has attracted recent research interest due to its therapeutic effects. However, most LIPUS driver systems currently available are large and expensive. We have proposed a LIPUS interface application-specific integrated circuit (ASIC) for use in wearable medical devices to address some of the challenges related to the size and cost of the current technologies. The proposed ASIC is a highly integrated system, incorporating a DC-DC module based on a charge pump architecture, a high-voltage level shifter, a half-bridge driver, a voltage-controlled oscillator, and a corresponding digital circuit module. Consequently, realizing a complete LIPUS driver system with this ASIC requires only a few passive components. Experimental tests indicated that the chip is capable of an output of 184.2 mW or 107.2 mW with a power supply of 5 V or 3.7 V, respectively, and its power conversion efficiency is approximately 30%. This output capacity allows the LIPUS driver system to deliver a spatial-average temporal-average (SATA) intensity of 29.5 mW/cm² or 51.6 mW/cm² with a power supply of 3.7 V or 5 V, respectively. The total die area, including pads, is 4 mm². The ASIC does not require inductors, improving its magnetic resonance imaging (MRI) compatibility. In summary, the proposed LIPUS interface chip presents a promising solution for the development of MRI-compatible and cost-effective wearable medical therapy devices.

25 pages, 1040 KiB  
Article
VConMC: Enabling Consistency Verification for Distributed Systems Using Implementation-Level Model Checkers and Consistency Oracles
by Beom-Heyn Kim
Electronics 2024, 13(6), 1153; https://doi.org/10.3390/electronics13061153 - 21 Mar 2024
Abstract
Many cloud services rely on distributed key-value stores such as ZooKeeper, Cassandra, and HBase. However, distributed key-value stores are notoriously difficult to design and implement without mistakes. Because data consistency is the contract that defines, for clients, what the correct values to read are for a given history of operations under a specific consistency model, consistency violations can confuse client applications by exposing invalid values. As a result, serious consequences such as data loss, data corruption, and unexpected behavior of client applications can occur. Software bugs are one of the main reasons why consistency violations may occur. Formal verification techniques may be used to make designs correct and minimize the risk of bugs in the implementation. However, formal verification is not a panacea due to limitations such as the cost of verification, the inability to verify existing implementations, and the human errors involved. Implementation-level model checking has been heavily explored by researchers over the past decades to formally verify whether the underlying implementations of distributed systems have bugs. Nevertheless, previous proposals are limited because their invariant checking is not versatile enough to cover the wide spectrum of consistency models, from eventual consistency to strong consistency. In this work, consistency oracles are employed for consistency invariant checking, which can be used by implementation-level model checkers to formally verify the data consistency model implementations of distributed key-value stores. To integrate consistency oracles with implementation-level distributed system model checkers, partial-order information obtained via the API is leveraged to avoid an exhaustive search during consistency invariant checking. Our evaluation results show that, by using the proposed method for consistency invariant checking, our prototype model checker, VConMC, can detect consistency violations caused by several real-world software bugs in a well-known distributed key-value store, ZooKeeper.
(This article belongs to the Special Issue Software Analysis, Quality, and Security)

14 pages, 6376 KiB  
Article
CSAPSO-BPNN-Based Modeling of End Airbag Stiffness of Nursing Transfer Robot
by Teng Liu, Xinlong Li, Kaicheng Qi, Zhong Zhang, Yunxuan Xiao and Shijie Guo
Electronics 2024, 13(6), 1152; https://doi.org/10.3390/electronics13061152 - 21 Mar 2024
Abstract
The use of nursing transfer robots is a vital solution to the problem of daily mobility difficulties for people with semi-disabilities. However, the fact that care-receivers have different physical characteristics leads to force concentration during human–robot interaction, which affects their comfort. To address this problem, this study installs an array of double wedge-shaped airbags onto the end-effector of a robot and analyses the airbags' mechanical properties. Firstly, this study performed mechanical testing and data collection on the airbag, recording its external load and displacement at various gas masses. Then, the performance of the Back Propagation (BP) neural network is improved using chaos (C) theory and simulated annealing particle swarm optimization (SAPSO), resulting in the establishment of the CSAPSO-BP neural network. By this method, a fitting model is developed to determine the mechanical parameters of the wedge-shaped airbag's stiffness, and the fitted external load–displacement relation is obtained. Data analyses show that, as the gas mass increases, the wedge-shaped airbag's stiffness passes through three distinct phases with quadratic, linear, and constant characteristics. These findings contribute to the structural optimization of airbags.

19 pages, 6211 KiB  
Article
Energy Efficient Graph-Based Hybrid Learning for Speech Emotion Recognition on Humanoid Robot
by Haowen Wu, Hanyue Xu, Kah Phooi Seng, Jieli Chen and Li Minn Ang
Electronics 2024, 13(6), 1151; https://doi.org/10.3390/electronics13061151 - 21 Mar 2024
Abstract
This paper presents a novel deep graph-based learning technique for speech emotion recognition which has been specifically tailored for energy efficient deployment within humanoid robots. Our methodology represents a fusion of scalable graph representations, rooted in the foundational principles of graph signal processing theories. By delving into the utilization of cycle or line graphs as fundamental constituents shaping a robust Graph Convolution Network (GCN)-based architecture, we propose an approach which allows the capture of relationships between speech signals to decode intricate emotional patterns and responses. Our methodology is validated and benchmarked against established databases such as IEMOCAP and MSP-IMPROV. Our model outperforms standard GCNs and prevalent deep graph architectures, demonstrating performance levels that align with state-of-the-art methodologies. Notably, our model achieves this feat while significantly reducing the number of learnable parameters, thereby increasing computational efficiency and bolstering its suitability for resource-constrained environments. This proposed energy-efficient graph-based hybrid learning methodology is applied towards multimodal emotion recognition within humanoid robots. Its capacity to deliver competitive performance while streamlining computational complexity and energy efficiency represents a novel approach in evolving emotion recognition systems, catering to diverse real-world applications where precision in emotion recognition within humanoid robots stands as a pivotal requisite.
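Using a cycle graph as the GCN backbone gives every node exactly two neighbours, which is what keeps the parameter count and propagation cost low. A parameter-free, pure-Python sketch of one propagation step over a cycle graph (the frame-level features are illustrative, not the paper's architecture):

```python
def cycle_adjacency(n):
    """Adjacency of a cycle graph with self-loops (A + I), as commonly used in GCNs."""
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        A[i][i] = 1.0
        A[i][(i + 1) % n] = 1.0
        A[i][(i - 1) % n] = 1.0
    return A

def gcn_propagate(A, X):
    """One parameter-free GCN step: row-normalised A times X, then ReLU."""
    n = len(A)
    out = []
    for i in range(n):
        deg = sum(A[i])
        row = [sum(A[i][k] * X[k][j] for k in range(n)) / deg
               for j in range(len(X[0]))]
        out.append([max(0.0, v) for v in row])
    return out

# Four frame-level speech features arranged on a cycle graph (illustrative).
X = [[1.0], [0.0], [0.0], [1.0]]
H = gcn_propagate(cycle_adjacency(4), X)  # each node now mixes its two neighbours
```

A learnable GCN layer would multiply the propagated features by a weight matrix; on a cycle graph that weight matrix is the only trainable component, which is the source of the parameter savings.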
(This article belongs to the Special Issue Green Artificial Intelligence: Theory and Applications)

17 pages, 1833 KiB  
Article
Fuzzy Inference Systems to Fine-Tune a Local Eigenvector Image Smoothing Method
by Khleef Almutairi, Samuel Morillas and Pedro Latorre-Carmona
Electronics 2024, 13(6), 1150; https://doi.org/10.3390/electronics13061150 - 21 Mar 2024
Abstract
Image denoising is a fundamental research topic in colour image processing, analysis, and transmission. Noise is an inevitable byproduct of image acquisition and transmission, and its nature is intimately linked to the underlying processes that produce it. Gaussian noise is a particularly prevalent type of noise that necessitates effective removal while ensuring the preservation of the original image's quality. This paper presents a colour image denoising framework that integrates fuzzy inference systems (FISs) with eigenvector analysis. The framework employs eigenvector analysis to extract relevant information from local image neighbourhoods. This information is subsequently fed into the FIS, which dynamically adjusts the intensity of the denoising process based on local characteristics. This approach recognizes that homogeneous areas may require less aggressive smoothing than detailed image regions. Images are converted from the RGB domain to an eigenvector-based space for smoothing and then converted back to the RGB domain. The effectiveness of the proposed method is established through the application of various image quality metrics and visual comparisons against established state-of-the-art techniques.
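Local eigenvector analysis of a colour neighbourhood amounts to diagonalising the 3×3 covariance of the RGB samples; the principal eigenvector gives the direction of strongest local colour variation, which a fuzzy rule base can then use to modulate smoothing strength. A pure-Python sketch using power iteration (the patch values are illustrative, and this is a generic construction, not the paper's exact method):

```python
def covariance(pixels):
    """3x3 covariance matrix of a list of RGB tuples."""
    n = len(pixels)
    mean = [sum(p[c] for p in pixels) / n for c in range(3)]
    return [[sum((p[i] - mean[i]) * (p[j] - mean[j]) for p in pixels) / n
             for j in range(3)] for i in range(3)]

def principal_eigenvector(M, iters=200):
    """Power iteration for the dominant eigenvector of a symmetric 3x3 matrix."""
    v = [1.0, 1.0, 1.0]
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# A near-grey neighbourhood: all variation lies along the (1,1,1) luminance
# axis, so the principal eigenvector points that way and smoothing along it
# leaves chromatic detail untouched.
patch = [(0.2, 0.2, 0.2), (0.5, 0.5, 0.5), (0.8, 0.8, 0.8)]
v = principal_eigenvector(covariance(patch))
```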
(This article belongs to the Special Issue Artificial Intelligence in Image Processing and Computer Vision)
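A minimal sketch of the eigenvector-space smoothing idea (illustrative only; in the paper a fuzzy inference system chooses the smoothing intensity adaptively per neighbourhood, whereas here the per-channel weights are fixed assumptions):

```python
import numpy as np

def eigen_smooth(patch, strength=(0.9, 0.5, 0.0)):
    """Smooth an RGB patch in its local eigenvector space.

    Eigenvectors of the neighbourhood colour covariance define new axes;
    each coordinate is shrunk towards the neighbourhood mean, most along
    the low-variance (noise-dominated) directions and not at all along
    the dominant one. The fixed `strength` weights stand in for the FIS,
    which would derive them from local image characteristics.
    """
    pixels = patch.reshape(-1, 3).astype(float)
    mean = pixels.mean(axis=0)
    centred = pixels - mean
    _, vecs = np.linalg.eigh(np.cov(centred.T))   # eigenvalues ascending
    coords = centred @ vecs                       # RGB -> eigen space
    coords *= 1.0 - np.asarray(strength)          # per-channel shrinkage
    return (coords @ vecs.T + mean).reshape(patch.shape)  # back to RGB

rng = np.random.default_rng(1)
noisy = 0.5 + 0.1 * rng.standard_normal((5, 5, 3))
smoothed = eigen_smooth(noisy)
print(smoothed.shape)  # (5, 5, 3)
```

Because the mean is preserved and only the eigen-coordinates are shrunk, the total colour variance of the patch can only decrease.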
20 pages, 928 KiB  
Article
Text-Centric Multimodal Contrastive Learning for Sentiment Analysis
by Heng Peng, Xue Gu, Jian Li, Zhaodan Wang and Hao Xu
Electronics 2024, 13(6), 1149; https://doi.org/10.3390/electronics13061149 - 21 Mar 2024
Viewed by 587
Abstract
Multimodal sentiment analysis aims to acquire and integrate sentimental cues from different modalities to identify the sentiment expressed in multimodal data. Despite the widespread adoption of pre-trained language models in recent years to enhance model performance, current research in multimodal sentiment analysis still faces several challenges. Firstly, although pre-trained language models have significantly elevated the density and quality of text features, the present models adhere to a balanced design strategy that lacks a concentrated focus on textual content. Secondly, prevalent feature fusion methods often hinge on spatial consistency assumptions, neglecting essential information about modality interactions and sample relationships within the feature space. In order to surmount these challenges, we propose a text-centric multimodal contrastive learning framework (TCMCL). This framework centers around text and augments text features separately from audio and visual perspectives. In order to effectively learn feature space information from different cross-modal augmented text features, we devised two contrastive learning tasks based on instance prediction and sentiment polarity; this promotes implicit multimodal fusion and obtains more abstract and stable sentiment representations. Our model demonstrates performance that surpasses the current state-of-the-art methods on both the CMU-MOSI and CMU-MOSEI datasets. Full article
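The instance-prediction contrastive task can be sketched generically with an InfoNCE-style loss (a hypothetical stand-in, not the paper’s exact objective): text features serve as anchors and their cross-modal augmented versions as positives, with the rest of the batch acting as negatives.

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE loss: row i of `positives` is the positive for row i of
    `anchors`; every other row in the batch acts as a negative."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = (a @ p.T) / temperature                 # (B, B) similarities
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))               # -log p(correct pair)

rng = np.random.default_rng(0)
text = rng.standard_normal((4, 8))                     # text anchors (hypothetical features)
augmented = text + 0.05 * rng.standard_normal((4, 8))  # e.g. audio-augmented text
print(info_nce(text, augmented) < info_nce(text, augmented[::-1]))  # True
```

Matching each anchor to its own augmentation yields a much lower loss than a shuffled pairing, which is exactly the signal that pulls cross-modal views of the same utterance together in the feature space.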
14 pages, 10485 KiB  
Article
Investigation on Synaptic Adaptation and Fatigue in ZnO/HfZrO-Based Memristors under Continuous Electrical Pulse Stimulation
by Zeyang Xiang, Kexiang Wang, Jie Lu, Zixuan Wang, Huilin Jin, Ranping Li, Mengrui Shi, Liuxuan Wu, Fuyu Yan and Ran Jiang
Electronics 2024, 13(6), 1148; https://doi.org/10.3390/electronics13061148 - 21 Mar 2024
Viewed by 465
Abstract
This study investigates the behavior of memristive devices characterized by oxygen-deficient ZnO and HfZrO films under continuous pulse stimulation. Their dynamics reflect the adaptability observed in neural synapses that are repeatedly subjected to stress, which ultimately mitigates the response to that stress. Observations show that the conductance of the memristors increases as continuous electrical pulses accumulate; however, the growth rate gradually diminishes, highlighting the devices’ capability to adapt to repetitive stimulation. This adjustment correlates with the transition of biological synapses from short-term to persistent memory stages, aligning with the principles of the Ebbinghaus memory model. The layered integration of ZnO and HfZrO in these memristors holds promising prospects for replicating the inherent synaptic features found in biological organisms. Full article
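The saturating growth the abstract reports can be captured by a simple phenomenological update (an illustrative model, not the paper’s fitted one): each pulse closes a fixed fraction of the remaining gap to a maximum conductance, so successive increments shrink.

```python
def pulse_train(g0, g_max, alpha, n_pulses):
    """Conductance after repeated identical pulses: each pulse closes a
    fraction `alpha` of the remaining gap to `g_max`, so the increment
    shrinks as the device adapts (saturating, fatigue-like growth)."""
    g = g0
    trace = [g]
    for _ in range(n_pulses):
        g += alpha * (g_max - g)
        trace.append(g)
    return trace

trace = pulse_train(g0=1.0, g_max=10.0, alpha=0.2, n_pulses=20)
increments = [b - a for a, b in zip(trace, trace[1:])]
print(round(trace[-1], 3))
```

The conductance rises monotonically while every increment is smaller than the last, mirroring the diminishing response to repeated stimulation described above.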
25 pages, 4974 KiB  
Article
Augmented Reality in Industry 4.0 Assistance and Training Areas: A Systematic Literature Review and Bibliometric Analysis
by Ginés Morales Méndez and Francisco del Cerro Velázquez
Electronics 2024, 13(6), 1147; https://doi.org/10.3390/electronics13061147 - 21 Mar 2024
Viewed by 724
Abstract
Augmented reality (AR) technology is making a strong appearance on the industrial landscape, driven by significant advances in technological tools and developments. Its application in areas such as training and assistance has attracted the attention of the research community, which sees AR as an opportunity to provide operators with a more visual, immersive and interactive environment. This article analyzes the integration of AR in the context of the fourth industrial revolution, commonly referred to as Industry 4.0. Starting with a systematic review, 60 relevant studies were identified from the Scopus and Web of Science databases. These findings were used to build bibliometric networks, providing a broad perspective on AR applications in training and assistance in the context of Industry 4.0. The article presents the current landscape, existing challenges and future directions of AR research applied to industrial training and assistance, based on a systematic literature review and citation network analysis. The findings highlight a growing trend in AR research, with a particular focus on addressing and overcoming the challenges associated with its implementation in complex industrial environments. Full article
(This article belongs to the Special Issue Perception and Interaction in Mixed, Augmented, and Virtual Reality)
28 pages, 1060 KiB  
Article
Reducing the Length of Dynamic and Relevant Slices by Pruning Boolean Expressions
by Thomas Hirsch and Birgit Hofer
Electronics 2024, 13(6), 1146; https://doi.org/10.3390/electronics13061146 - 20 Mar 2024
Viewed by 548
Abstract
Dynamic and relevant (backward) slicing helps programmers in the debugging process by reducing the number of statements in an execution trace. In this paper, we propose an approach called pruned slicing, which can further reduce the size of slices by reasoning over Boolean expressions. It adds only those parts of a Boolean expression that are responsible for the evaluation outcome of the Boolean expression to the set of relevant variables. We empirically evaluate our approach and compare it to dynamic and relevant slicing using three small benchmarks: the traffic collision avoidance system (TCAS), the Refactory dataset, and QuixBugs. Pruned slicing reduces the size of the TCAS slices on average by 10.2%, but it does not reduce the slice sizes of the Refactory and QuixBugs programs. The times required for computing pruned dynamic and relevant slices are comparable to the computation times of non-pruned dynamic and relevant slices. Thus, pruned slicing is an extension of dynamic and relevant slicing that can reduce the size of slices while having a negligible computational overhead. Full article
(This article belongs to the Special Issue Program Slicing and Source Code Analysis: Methods and Applications)
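The core pruning idea can be sketched in a few lines (a toy illustration of the principle, not the authors’ implementation): only the operands that force the outcome of a Boolean expression need to enter the relevant-variable set.

```python
def relevant_operands(op, operands):
    """Operands responsible for a Boolean expression's outcome.

    For a conjunction, any single False operand alone fixes the result,
    so the True operands can be pruned; dually for a disjunction. If no
    operand forces the outcome, every operand stays relevant.
    `operands` is a list of (name, value) pairs.
    """
    if op == "and":
        falsy = [name for name, value in operands if not value]
        return falsy if falsy else [name for name, _ in operands]
    if op == "or":
        truthy = [name for name, value in operands if value]
        return truthy if truthy else [name for name, _ in operands]
    raise ValueError(f"unknown operator: {op}")

print(relevant_operands("and", [("a", True), ("b", False), ("c", True)]))  # ['b']
print(relevant_operands("or", [("a", False), ("b", False)]))  # ['a', 'b']
```

In the first case, `b` alone decides that the conjunction is False, so `a` and `c` are pruned from the slice; in the second, the disjunction is False only because every operand is, so all remain relevant.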
23 pages, 11434 KiB  
Article
Noncontact Automatic Water-Level Assessment and Prediction in an Urban Water Stream Channel of a Volcanic Island Using Deep Learning
by Fábio Mendonça, Sheikh Shanawaz Mostafa, Fernando Morgado-Dias, Joaquim Amândio Azevedo, Antonio G. Ravelo-García and Juan L. Navarro-Mesa
Electronics 2024, 13(6), 1145; https://doi.org/10.3390/electronics13061145 - 20 Mar 2024
Viewed by 499
Abstract
Traditional methods for water-level measurement usually employ permanent structures, such as a scale built into the water system, which is costly and laborious and can be washed away by the water. This research proposes a low-cost, automatic water-level estimator that can appraise the level without disturbing water flow or affecting the environment. The estimator was developed for urban areas of a volcanic island water channel, using machine learning to evaluate images captured by a low-cost remote monitoring system. For this purpose, images from over one year were collected. For better performance, captured images were processed by converting them to a proposed color space, named HLE, composed of hue, lightness, and edge. Multiple residual neural network architectures were examined. The best-performing model was ResNeXt, which achieved a mean absolute error of 1.14 cm using squeeze and excitation and data augmentation. An explainability analysis was carried out for transparency and a visual explanation. In addition, models were developed to predict water levels. Three models successfully forecasted the subsequent water levels for 10, 60, and 120 min, with mean absolute errors of 1.76 cm, 2.09 cm, and 2.34 cm, respectively. The models could follow slow and fast transitions, leading to a potential flooding risk-assessment mechanism. Full article
(This article belongs to the Special Issue Selected Papers from Young Researchers in AI for Computer Vision)
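One plausible reading of an HLE decomposition can be sketched as follows (the paper’s exact HLE definition is not reproduced here; this sketch assumes HSL-style hue and lightness plus a gradient-magnitude edge channel):

```python
import numpy as np

def to_hle(rgb):
    """Convert an RGB image (values in [0, 1]) to a hue/lightness/edge stack.

    The exact HLE construction may differ from the paper's; this sketch
    uses HSL-style hue and lightness, and the finite-difference gradient
    magnitude of the lightness channel as the edge component.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    mx, mn = rgb.max(axis=-1), rgb.min(axis=-1)
    light = (mx + mn) / 2.0
    c = np.where(mx > mn, mx - mn, 1.0)   # chroma; 1.0 avoids division by zero
    hue = np.select(
        [mx == r, mx == g, mx == b],
        [((g - b) / c) % 6, (b - r) / c + 2, (r - g) / c + 4],
    ) / 6.0
    hue = np.where(mx > mn, hue, 0.0)     # grey pixels: hue undefined, use 0
    gy, gx = np.gradient(light)
    edge = np.hypot(gx, gy)
    return np.stack([hue, light, edge], axis=-1)

img = np.random.default_rng(2).random((6, 6, 3))
hle = to_hle(img)
print(hle.shape)  # (6, 6, 3)
```

The appeal of such a space for this task is that the edge channel makes the waterline explicit to the network instead of leaving it implicit in raw RGB.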
14 pages, 2887 KiB  
Article
Maximum Principle in Autonomous Multi-Object Safe Trajectory Optimization
by Józef Andrzej Lisowski
Electronics 2024, 13(6), 1144; https://doi.org/10.3390/electronics13061144 - 20 Mar 2024
Viewed by 423
Abstract
This article presents the task of optimizing the control of an autonomous object within a group of other passing objects using Pontryagin’s bounded maximum principle. The basis of this principle is a multidimensional nonlinear model of the control process, with state constraints reflecting the motion of the passing objects. The analytical synthesis of optimal multi-object control became the basis for an algorithm that determines the optimal, safe object trajectory. Simulation tests of the algorithm on real navigation situations with varying numbers of objects illustrate the resulting safe trajectories in changing environmental conditions. The optimal object trajectory obtained using Pontryagin’s maximum principle was compared with the trajectory calculated using the Bellman dynamic programming method. The analysis of the results led to valuable conclusions and a plan for further research in the field of autonomous vehicle control optimization. The maximum principle algorithm allows one to take into account a larger number of objects whose data are derived from ARPA anti-collision radar systems. Full article
(This article belongs to the Section Systems & Control Engineering)
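For reference, the generic form of the condition such an algorithm builds on, for dynamics with state x, admissible controls u in a constraint set U, and costate ψ, reads:

```latex
% Generic statement of Pontryagin's maximum principle for
% \dot{x} = f(x, u), admissible controls u \in U, costate \psi:
\begin{aligned}
H(x, u, \psi) &= \psi^{\mathsf{T}} f(x, u), \\
\dot{\psi} &= -\frac{\partial H}{\partial x}, \qquad
u^{*}(t) = \arg\max_{u \in U} H\bigl(x^{*}(t), u, \psi(t)\bigr) .
\end{aligned}
```

In the multi-object setting described above, the state constraints encoding the passing objects restrict the admissible set U along the trajectory.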
0 pages, 17000 KiB  
Article
A Hybrid-Model-Based CNC Machining Trajectory Error Prediction and Compensation Method
by Wuwei He, Lipeng Zhang, Yi Hu, Zheng Zhou, Yusong Qiao and Dong Yu
Electronics 2024, 13(6), 1143; https://doi.org/10.3390/electronics13061143 - 20 Mar 2024
Viewed by 554
Abstract
Intelligent manufacturing is the main direction of Industry 4.0, pointing towards the future development of manufacturing. The core component of intelligent manufacturing is the computer numerical control (CNC) system. Predicting and compensating for machining trajectory errors by controlling the CNC system’s accuracy is of great significance in enhancing the efficiency, quality, and flexibility of intelligent manufacturing. Traditional machining trajectory error prediction and compensation methods make it challenging to consider the uncertainties that occur during the machining process, and they cannot meet the requirements of intelligent manufacturing with respect to the complexity and accuracy of process parameter optimization. In this paper, we propose a hybrid-model-based machining trajectory error prediction and compensation method to address these issues. Firstly, a digital twin framework for the CNC system, based on a hybrid model, was constructed. The machining trajectory error prediction and compensation mechanisms were then analyzed, and an artificial intelligence (AI) algorithm was used to predict the machining trajectory error. This error was then compensated for via the adaptive compensation method. Finally, the feasibility and effectiveness of the method were verified through specific experiments, and a realization case for this digital-twin-driven machining trajectory error prediction and compensation method was provided. Full article
(This article belongs to the Special Issue Advances in Embedded Deep Learning Systems)
24 pages, 7362 KiB  
Article
A Novel Distributed Adaptive Controller for Multi-Agent Systems with Double-Integrator Dynamics: A Hedging-Based Approach
by Atahan Kurttisi, Kadriye Merve Dogan and Benjamin Charles Gruenwald
Electronics 2024, 13(6), 1142; https://doi.org/10.3390/electronics13061142 - 20 Mar 2024
Viewed by 452
Abstract
In this paper, we focus on designing a model reference adaptive control-based distributed control law to drive a set of agents with double-integrator dynamics in a leader–follower fashion in the presence of system anomalies such as agent-based uncertainties, unknown control effectiveness, and actuator dynamics. In particular, we introduce a novel hedging-based reference model with second-order dynamics to allow an adaptation in the presence of actuator dynamics. We show the stability of the overall closed-loop multi-agent system by utilizing the Lyapunov Stability Theory, where we analyze the stability condition by using the Linear Matrix Inequalities method to show the boundedness of the reference model and actuator states. Finally, we illustrate the efficacy of the proposed distributed adaptive controller on an undirected and connected line graph in five cases. Full article
(This article belongs to the Special Issue Networked Robotics and Control Systems)
13 pages, 4454 KiB  
Article
A High-Precision Fall Detection Model Based on Dynamic Convolution in Complex Scenes
by Yong Qin, Wuqing Miao and Chen Qian
Electronics 2024, 13(6), 1141; https://doi.org/10.3390/electronics13061141 - 20 Mar 2024
Viewed by 585
Abstract
Falls can cause significant harm, and even death, to elderly individuals. Therefore, it is crucial to have a highly accurate fall detection model that can promptly detect and respond to changes in posture. The YOLOv8 model may not effectively address the challenges posed by deformation, different scale targets, and occlusion in complex scenes during human falls. This paper presents ESD-YOLO, a new high-precision fall detection model based on dynamic convolution that improves upon the YOLOv8 model. The C2f module in the backbone network is replaced with the C2Dv3 module to enhance the network’s ability to capture complex details and deformations. The Neck section uses the DyHead block to unify multiple attentional operations, enhancing the detection accuracy of targets at different scales and improving performance in cases of occlusion. Additionally, the proposed algorithm uses the loss function EASlideloss to increase the model’s focus on hard samples and to address sample imbalance. The experimental results demonstrate a 1.9% increase in precision, a 4.1% increase in recall, a 4.3% increase in mAP0.5, and a 2.8% increase in mAP0.5:0.95 compared to YOLOv8. In particular, ESD-YOLO significantly improves the precision of human fall detection in complex scenes. Full article
(This article belongs to the Special Issue Advances in Computer Vision and Deep Learning and Its Applications)
12 pages, 602 KiB  
Article
Entropy Model of Rosin Autonomous Boolean Network Digital True Random Number Generator
by Yi Zong, Lihua Dong and Xiaoxin Lu
Electronics 2024, 13(6), 1140; https://doi.org/10.3390/electronics13061140 - 20 Mar 2024
Viewed by 433
Abstract
A True Random Number Generator (TRNG) is an important component in cryptographic algorithms and protocols. The Rosin Autonomous Boolean Network (ABN) digital TRNG has been widely studied due to its favorable properties, such as low energy consumption, high speed, strong platform portability, and strong randomness. However, there is still a lack of suitable entropy models from which to deduce the design-parameter requirements that ensure true randomness. The current model for evaluating the entropy of oscillator-based TRNGs is not applicable to Rosin ABN TRNGs due to low-frequency noise. This work presents a new, suitable stochastic model to evaluate the entropy of Rosin ABN TRNGs. Theoretical analysis and simulation experiments verify the correctness and effectiveness of the model, and, finally, appropriate sampling parameters for Rosin ABN TRNGs are given to guarantee sufficient entropy per random bit and thus true randomness. Full article
(This article belongs to the Special Issue Recent Advances and Applications of Network Security and Cryptography)
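The quantity such an entropy model ultimately bounds is the Shannon entropy per output bit; for an idealized i.i.d. binary source it reduces to the binary entropy of the ones probability (a simplified view; the paper’s stochastic model additionally accounts for sampling parameters and noise):

```python
import math

def entropy_per_bit(p_one):
    """Shannon entropy (in bits) of a binary source with bias p_one.
    A TRNG needs this close to 1 bit per sample for full randomness."""
    if p_one in (0.0, 1.0):
        return 0.0
    q = 1.0 - p_one
    return -(p_one * math.log2(p_one) + q * math.log2(q))

print(entropy_per_bit(0.5))   # 1.0 -- unbiased source
print(round(entropy_per_bit(0.6), 4))
```

Even a modest bias costs measurable entropy, which is why design parameters must be chosen so that the per-bit entropy stays near its maximum.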
23 pages, 2104 KiB  
Article
Modeling and Analyzing Reaction Systems in Maude
by Demis Ballis, Linda Brodo and Moreno Falaschi
Electronics 2024, 13(6), 1139; https://doi.org/10.3390/electronics13061139 - 20 Mar 2024
Viewed by 430
Abstract
Reaction Systems (RSs) are a successful computational framework for modeling systems inspired by biochemistry. An RS defines a set of rules (reactions) over a finite set of entities (e.g., molecules, proteins, genes, etc.). A computation in this system is performed by rewriting a finite set of entities (a computation state) using all the enabled reactions in the RS, thereby producing a new set of entities (a new computation state). The number of entities in the reactions and in the computation states can be large, making the analysis of RS behavior difficult without proper automated support. In this paper, we use the Maude language—a programming language based on rewriting logic—to define a formal executable semantics for RSs, which can be used to precisely simulate the system behavior as well as to perform reachability analysis over the system computation space. Then, by enriching the proposed semantics, we formalize a forward slicer algorithm for RSs that allows us to observe the evolution of the system on both the initial input and a fragment of it (the slicing criterion), thus facilitating the detection of forward causality and influence relations due to the absence/presence of some entities in the slicing criterion. The pursued approach is illustrated by a biological reaction system that models a gene regulation network for controlling the process of differentiation of T helper lymphocytes. Full article
(This article belongs to the Special Issue Program Slicing and Source Code Analysis: Methods and Applications)
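The RS semantics being formalized can be sketched in a few lines (a toy illustration, independent of the Maude encoding): a reaction is a triple of reactant, inhibitor, and product sets, and a step rewrites the state to the union of the products of all enabled reactions.

```python
def rs_step(state, reactions):
    """One computation step of a Reaction System: a reaction (R, I, P)
    is enabled on `state` iff every reactant in R is present and no
    inhibitor in I is; the next state is the union of the products of
    all enabled reactions (entities are not preserved by default)."""
    nxt = set()
    for reactants, inhibitors, products in reactions:
        if reactants <= state and not (inhibitors & state):
            nxt |= products
    return nxt

# Toy gene-regulation flavour (hypothetical entities):
# gene g expresses protein p unless repressor r is present
reactions = [
    ({"g"}, {"r"}, {"g", "p"}),   # expression; also keeps the gene
    ({"g", "r"}, set(), {"g"}),   # repressed: gene persists, no protein
]
print(sorted(rs_step({"g"}, reactions)))        # ['g', 'p']
print(sorted(rs_step({"g", "r"}, reactions)))   # ['g']
```

Note the non-permanency principle: an entity survives a step only if some enabled reaction produces it, which is what makes absence as causally significant as presence for the slicer.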
13 pages, 1897 KiB  
Article
Driver Abnormal Expression Detection Method Based on Improved Lightweight YOLOv5
by Keming Yao, Zhongzhou Wang, Fuao Guo and Feng Li
Electronics 2024, 13(6), 1138; https://doi.org/10.3390/electronics13061138 - 20 Mar 2024
Viewed by 562
Abstract
The rapid advancement of intelligent assisted driving technology has significantly enhanced transportation convenience in society and contributed to the mitigation of traffic safety hazards. Addressing the potential for drivers to experience abnormal physical conditions during the driving process, an enhanced lightweight network model based on YOLOv5 for detecting abnormal facial expressions of drivers is proposed in this paper. Initially, the lightweighting of the YOLOv5 backbone network is achieved by integrating the FasterNet Block, a lightweight module from the FasterNet network, with the C3 module in the main network. This combination forms the C3-faster module. Subsequently, the original convolutional modules in the YOLOv5 model are replaced with the improved GSConvns module to reduce computational load. Building upon the GSConvns module, the VoV-GSCSP module is constructed to ensure the lightweighting of the neck network while maintaining detection accuracy. Finally, channel pruning and fine-tuning operations are applied to the entire model. Channel pruning involves removing channels with minimal impact on output results, further reducing the model’s computational load, parameters, and size. The fine-tuning operation compensates for any potential loss in detection accuracy. Experimental results demonstrate that the proposed model achieves a substantial reduction in both parameter count and computational load while maintaining a high detection accuracy of 84.5%. The improved model has a compact size of only 4.6 MB, making it more conducive to the efficient operation of onboard computers. Full article
(This article belongs to the Special Issue Advances of Artificial Intelligence and Vision Applications)
49 pages, 1033 KiB  
Article
A Novel Authentication Scheme Based on Verifiable Credentials Using Digital Identity in the Context of Web 3.0
by Stefania Loredana Nita and Marius Iulian Mihailescu
Electronics 2024, 13(6), 1137; https://doi.org/10.3390/electronics13061137 - 20 Mar 2024
Viewed by 622
Abstract
This paper explores the concept of digital identity in the evolving landscape of Web 3.0, focusing on the development and implications of a novel authentication scheme using verifiable credentials. The background sets the stage by placing digital identity within the broad context of Web 3.0’s decentralized, blockchain-based internet, highlighting the transition from earlier web paradigms. The methods section outlines the theoretical framework and technologies employed, such as blockchain, smart contracts, and cryptographic algorithms. The results summarize the main findings, including the proposed authentication scheme’s ability to enhance user control, security, and privacy in digital interactions. Finally, the conclusions discuss the broader implications of this scheme for future online transactions and digital identity management, emphasizing the shift towards self-sovereignty and reduced reliance on centralized authorities. Full article
(This article belongs to the Special Issue Applied Cryptography and Practical Cryptoanalysis for Web 3.0)
22 pages, 3168 KiB  
Article
Secure Change Control for Supply Chain Systems via Dynamic Event Triggered Using Reinforcement Learning under DoS Attacks
by Lingling Fan, Bolin Zhang, Shuangshuang Xiong and Qingkui Li
Electronics 2024, 13(6), 1136; https://doi.org/10.3390/electronics13061136 - 20 Mar 2024
Viewed by 421
Abstract
In this paper, a distributed secure change control scheme for supply chain systems is presented under denial-of-service (DoS) attacks. To eliminate the effect of DoS attacks on supply chain systems, a secure change compensation is designed. A distributed policy iteration method is established to approximate the coupled Hamilton–Jacobi–Isaacs (HJI) equations. Based on the established reinforce–critic–actor (RCA) structure using reinforcement learning (RL), the reinforced signals, performance indicators, and disturbance input are proposed to update the traditional time-triggered mechanism, and the control input is proposed to update the dynamic event-triggered mechanism (DETM). Stability is guaranteed based on the Lyapunov method under secure change control. Simulation results for supply chain systems verify the effectiveness of the proposed secure change control scheme. Full article
15 pages, 2990 KiB  
Article
Pedestrian Trajectory Prediction Based on Motion Pattern De-Perturbation Strategy
by Yingjian Deng, Li Zhang, Jie Chen, Yu Deng, Zhixiang Huang, Yingsong Li, Yice Cao, Zhongcheng Wu and Jun Zhang
Electronics 2024, 13(6), 1135; https://doi.org/10.3390/electronics13061135 - 20 Mar 2024
Viewed by 441
Abstract
Pedestrian trajectory prediction is extremely challenging due to the complex social attributes of pedestrians. Introducing latent vectors to model trajectory multimodality has become the mainstream solution. However, previous approaches have overlooked the redundancy effects that arise from the introduction of latent vectors. Additionally, they often fail to consider the interference caused by pedestrians with no trajectory history during model training, which prevents the model from fully utilizing the training data. Therefore, we propose a two-stage motion pattern de-perturbation strategy: a plug-and-play approach that introduces optimization features to model the redundancy effect caused by latent vectors and thereby eliminate it in the trajectory prediction phase. We also propose loss masks to reduce the interference of invalid data during training, so as to accurately model pedestrian motion patterns with strong physical interpretability. Our comparative experiments on the publicly available ETH and UCY pedestrian trajectory datasets, as well as the Stanford UAV dataset, show that our optimization strategy achieves better pedestrian trajectory prediction accuracy than a range of state-of-the-art baseline models; in particular, it effectively absorbs the training data to help the baseline models reach their optimal modeling accuracy. Full article
(This article belongs to the Special Issue Intelligent Mobile Robotic Systems: Decision, Planning and Control)
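The loss-mask idea can be sketched as a masked mean-squared error (a hypothetical stand-in for the paper’s training loss): pedestrians without trajectory history receive a zero mask and contribute nothing to the objective.

```python
import numpy as np

def masked_mse(pred, target, valid):
    """MSE over only the valid (observed) trajectory points.

    pred, target: (N, T, 2) predicted and ground-truth xy positions;
    valid: (N, T) 0/1 flags. Padded pedestrians with no history get an
    all-zero mask, so padding never pollutes the gradient signal.
    """
    valid = valid.astype(float)[..., None]          # (N, T, 1)
    se = ((pred - target) ** 2) * valid
    return se.sum() / np.maximum(valid.sum() * pred.shape[-1], 1.0)

pred = np.ones((2, 3, 2))
target = np.zeros((2, 3, 2))
valid = np.array([[1, 1, 1], [0, 0, 0]])  # second pedestrian has no history
print(masked_mse(pred, target, valid))    # 1.0 -- padded agent ignored
```

Without the mask, the padded second pedestrian would halve the apparent error and pull the model towards the padding values rather than real motion patterns.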