Editor’s Choice Articles

Editor’s Choice articles are based on recommendations by the scientific editors of MDPI journals from around the world. Editors select a small number of articles recently published in the journal that they believe will be particularly interesting to readers, or important in the respective research area. The aim is to provide a snapshot of some of the most exciting work published in the various research areas of the journal.

17 pages, 1591 KiB  
Article
Understanding Learner Satisfaction in Virtual Learning Environments: Serial Mediation Effects of Cognitive and Social-Emotional Factors
by Xin Yin, Jiakai Zhang, Gege Li and Heng Luo
Electronics 2024, 13(12), 2277; https://doi.org/10.3390/electronics13122277 - 10 Jun 2024
Viewed by 486
Abstract
This study explored the relationship between technology acceptance and learning satisfaction within a virtual learning environment (VLE) with cognitive presence, cognitive engagement, social presence, and emotional engagement as mediators. A total of 237 university students participated and completed a questionnaire after studying in the Virbela VLE. The results revealed direct and indirect links between technology acceptance and virtual learning satisfaction. The mediation analysis showed the critical mediating roles of cognitive presence and emotional engagement in fostering satisfaction. There also appeared to be a sequential mediating pathway from technology acceptance to learning satisfaction through social presence and emotional engagement. Notably, cognitive engagement and social presence did not have a significant mediating effect on satisfaction. These results provide a supplementary perspective on how technological, cognitive, and emotional factors can enhance student satisfaction in VLEs. The study concludes with several implications for future research and practice of VLEs in higher education. Full article
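To make the mediation logic concrete: in a serial mediation model, the indirect effect of technology acceptance on satisfaction through two mediators is the product of the three path coefficients, each estimated by regression. A minimal numpy sketch on synthetic data (the variable roles follow the abstract, but all effect sizes and the data are invented for illustration, not the study's estimates):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 237  # sample size matching the study
X = rng.normal(size=n)                          # technology acceptance
M1 = 0.5 * X + rng.normal(size=n)               # social presence (mediator 1)
M2 = 0.4 * M1 + 0.2 * X + rng.normal(size=n)    # emotional engagement (mediator 2)
Y = 0.3 * M2 + 0.2 * X + rng.normal(size=n)     # learning satisfaction

def slopes(y, *predictors):
    """OLS coefficients of y on the given predictors (intercept dropped)."""
    A = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta[1:]

a1 = slopes(M1, X)[0]            # X -> M1
d21 = slopes(M2, X, M1)[1]       # M1 -> M2, controlling for X
b2 = slopes(Y, X, M1, M2)[2]     # M2 -> Y, controlling for X and M1
serial_indirect = a1 * d21 * b2  # sequential pathway X -> M1 -> M2 -> Y
```

In practice, the significance of such an indirect effect is judged with bootstrap confidence intervals rather than a single point estimate.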

18 pages, 3974 KiB  
Article
Curved Domains in Magnetics: A Virtual Element Method Approach for the T.E.A.M. 25 Benchmark Problem
by Franco Dassi, Paolo Di Barba and Alessandro Russo
Electronics 2024, 13(11), 2053; https://doi.org/10.3390/electronics13112053 - 24 May 2024
Viewed by 461
Abstract
In this paper, we are interested in solving optimal shape design problems. A critical challenge within this framework is generating the mesh of the computational domain at each optimisation step according to the information provided by the minimising functional. To enhance efficiency, we propose a strategy based on the Finite Element Method (FEM) and the Virtual Element Method (VEM). Specifically, we exploit the flexibility of the VEM in dealing with generally shaped polygons, including those with hanging nodes, to update the mesh solely in regions where the shape varies. In the remaining parts of the domain, we employ the FEM, known for its robustness and applicability in such scenarios. We numerically validate the proposed approach on the T.E.A.M. 25 benchmark problem and compare the results obtained with this procedure with those proposed in the literature based solely on the FEM. Moreover, since the T.E.A.M. 25 benchmark problem is also characterised by curved shapes, we utilise the VEM to accurately incorporate these “exact” curves into the discrete solution itself. Full article
(This article belongs to the Section Microelectronics)

21 pages, 4639 KiB  
Article
Enhancing Learning of 3D Model Unwrapping through Virtual Reality Serious Game: Design and Usability Validation
by Bruno Rodriguez-Garcia, José Miguel Ramírez-Sanz, Ines Miguel-Alonso and Andres Bustillo
Electronics 2024, 13(10), 1972; https://doi.org/10.3390/electronics13101972 - 17 May 2024
Viewed by 617
Abstract
Given the difficulty of explaining the unwrapping process through traditional teaching methodologies, this article presents the design, development, and validation of an immersive Virtual Reality (VR) serious game, named Unwrap 3D Virtual: Ready (UVR), aimed at facilitating the learning of unwrapping 3D models. The game incorporates animations to aid users in understanding the unwrapping process, following Mayer’s Cognitive Theory of Multimedia Learning and Gamification principles. The game is structured into four levels of increasing complexity through which users progress, with the final level allowing for result review. A sample of 53 students with experience in 3D modeling was categorized based on device (PC or VR) and previous experience (XP) in VR, resulting in Low-XP, Mid-XP, and High-XP groups. Hierarchical clustering identified three clusters, reflecting varied user behaviors. Surveys assessing game experience, presence, and satisfaction show that VR users reported higher immersion, although satisfaction was higher in the PC group because of a bug in the VR version. Novice users exhibited higher satisfaction, attributed to the novelty effect, while experienced users demonstrated greater control and proficiency. Full article
(This article belongs to the Special Issue Serious Games and Extended Reality (XR))

13 pages, 710 KiB  
Article
Personalized Feedback in Massive Open Online Courses: Harnessing the Power of LangChain and OpenAI API
by Miguel Morales-Chan, Hector R. Amado-Salvatierra, José Amelio Medina, Roberto Barchino, Rocael Hernández-Rizzardini and António Moreira Teixeira
Electronics 2024, 13(10), 1960; https://doi.org/10.3390/electronics13101960 - 16 May 2024
Viewed by 592
Abstract
Studies show that feedback greatly improves student learning outcomes, but achieving this level of personalization at scale is a complex task, especially in the diverse and open environment of Massive Open Online Courses (MOOCs). This research provides a novel method for using cutting-edge artificial intelligence technology to enhance the feedback mechanism in MOOCs. The main goal of this research is to leverage AI’s capabilities to automate and refine the MOOC feedback process, with special emphasis on courses that allow students to learn at their own pace. The combination of LangChain—a cutting-edge framework specifically designed for applications that use language models—with the OpenAI API forms the basis of this work. This integration creates dynamic, scalable, and intelligent environments that can provide students with individualized, insightful feedback. A well-organized assessment rubric directs the feedback system, ensuring that the responses are both tailored to each learner’s unique path and aligned with academic standards and objectives. This initiative uses Generative AI to enhance MOOCs, making them more engaging, responsive, and successful for a diverse, international student body. Beyond mere automation, this technology has the potential to transform fundamentally how learning is supported in digital environments and how feedback is delivered. The initial results demonstrate increased learner satisfaction and progress, thereby validating the effectiveness of personalized feedback powered by AI. Full article
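The core of such a system is a rubric-conditioned prompt assembled per submission and sent to a language model. A minimal sketch of the prompt-assembly step (the rubric text, criteria names, and wording below are invented placeholders, not the rubric used in the study):

```python
# Hypothetical assessment rubric; the actual course rubric is not reproduced here.
RUBRIC = {
    "correctness": "Does the submission answer the question accurately?",
    "depth": "Does it go beyond restating the course material?",
    "clarity": "Is the argument easy to follow?",
}

def build_feedback_prompt(submission: str, rubric: dict) -> str:
    """Assemble a rubric-guided prompt asking for personalized feedback."""
    criteria = "\n".join(f"- {name}: {desc}" for name, desc in rubric.items())
    return (
        "You are a MOOC teaching assistant. Assess the submission below "
        "against each rubric criterion and give personalized, encouraging "
        "feedback per criterion.\n\n"
        f"Rubric:\n{criteria}\n\n"
        f"Submission:\n{submission}\n"
    )
```

The resulting string would then be wrapped in a LangChain prompt template and passed to the OpenAI API; the chain wiring and model choice are omitted here.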

16 pages, 6397 KiB  
Article
Selecting the Best Permanent Magnet Synchronous Machine Design for Use in a Small Wind Turbine
by Marcin Lefik, Anna Firych-Nowacka, Michal Lipian, Malgorzata Brzozowska and Tomasz Smaz
Electronics 2024, 13(10), 1929; https://doi.org/10.3390/electronics13101929 - 15 May 2024
Viewed by 1327
Abstract
The article describes the selection of a permanent magnet synchronous machine design that could be implemented in a small wind turbine designed by the GUST student organization together with researchers at the Technical University of Lodz. Based on measurements of the characteristics of available machines, eight initial designs with different rotor configurations were proposed. The stator size, the number of pole pairs, and the magnet dimensions were used as initial parameters of the designed machines. The analysis was carried out with respect to the K-index, a so-called benefit index: the idea was to make the selected design as efficient as possible while keeping production costs and manufacturing time low. This paper describes how to select the best design of a permanent magnet synchronous generator intended to work with a small wind turbine. All generator parameters were selected with the competition requirements in mind, as the designed generator will be used in the authors’ wind turbine. Based on the determined characteristics of the generator variants and the value of the K-index, a generator with an interior (embedded) permanent magnet rotor was selected as the best solution. The aforementioned K-index is a proprietary concept developed for selecting the most suitable generator design. This paper did not use formal optimization methods; the analysis was supported only by the K-index. Full article
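A benefit index of this kind can be sketched as a weighted sum of normalized criteria, with the design maximizing the score selected. The candidate designs, criteria, numbers, and weights below are invented placeholders (the paper's actual K-index definition is proprietary):

```python
# Placeholder candidate designs and criteria; not the paper's data.
designs = {
    "A (surface magnets)":  {"efficiency": 0.90, "cost": 620.0, "build_hours": 40.0},
    "B (interior magnets)": {"efficiency": 0.93, "cost": 680.0, "build_hours": 55.0},
    "C (interior magnets, fewer poles)": {"efficiency": 0.92, "cost": 590.0, "build_hours": 48.0},
}
WEIGHTS = {"efficiency": 0.5, "cost": 0.3, "build_hours": 0.2}   # sum to 1
HIGHER_IS_BETTER = {"efficiency": True, "cost": False, "build_hours": False}

def k_index(name):
    """Weighted sum of min-max-normalized criteria across the candidates."""
    score = 0.0
    for crit, w in WEIGHTS.items():
        vals = [d[crit] for d in designs.values()]
        lo, hi = min(vals), max(vals)
        norm = (designs[name][crit] - lo) / (hi - lo)  # 0..1 across candidates
        if not HIGHER_IS_BETTER[crit]:
            norm = 1.0 - norm                          # lower cost/time is better
        score += w * norm
    return score

best = max(designs, key=k_index)
```

With these placeholder numbers, design C wins by balancing efficiency against cost and build time; changing the weights changes the ranking, which is the point of such an index.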

11 pages, 4986 KiB  
Article
A Multiplexing Optical Temperature Sensing System for Induction Motors Using Few-Mode Fiber Spatial Mode Diversity
by Feng Liu, Tianle Gu and Weicheng Chen
Electronics 2024, 13(10), 1932; https://doi.org/10.3390/electronics13101932 - 15 May 2024
Viewed by 499
Abstract
Induction motors are widely applied in motor drive systems. Effective temperature monitoring is one of the keys to ensuring the reliability and optimal performance of the motors. Therefore, this paper introduces a multiplexed optical temperature sensing system for induction motors based on few-mode fiber (FMF) spatial mode diversity. By using the spatial mode dimension of FMF, fiber Bragg gratings (FBGs) carried by different spatial-mode optical paths are embedded at different positions in the motor to realize multipoint, synchronously multiplexed temperature monitoring. The paper establishes and demonstrates a photonic lantern-based mode-division sensing system for motor temperature monitoring. As a proof of concept, multiplexed temperature sensing experiments on motor stators are carried out using the fundamental mode LP01 and the higher-order spatial modes LP11, LP21, and LP02. The sensitivities of the FBGs carried by these modes are 0.0107 nm/°C, 0.0106 nm/°C, 0.0097 nm/°C, and 0.0116 nm/°C, respectively. The dynamic temperature changes of the stator at different positions in the motor are measured at speeds of 1000, 1500, and 2000 rpm under no load, a 3 kg load, and a 5 kg load, as well as at specific speed–load combinations such as 1500 rpm with 3 kg, 1000 rpm with no load, and 2000 rpm with 5 kg, and the results obtained with the different spatial modes are compared and analyzed. The findings indicate that the different spatial modes can accurately reflect temperature variations at various positions in the motor stator winding. Full article
(This article belongs to the Special Issue Sensing Technology and Intelligent Application)
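Converting the reported sensitivities into temperature readings is a one-line calculation: the measured Bragg-wavelength shift divided by the mode's sensitivity gives the temperature change. A small sketch using the sensitivity values quoted in the abstract:

```python
# FBG temperature sensitivities per spatial mode (nm/°C), from the abstract
SENSITIVITY = {"LP01": 0.0107, "LP11": 0.0106, "LP21": 0.0097, "LP02": 0.0116}

def delta_temperature(mode: str, delta_lambda_nm: float) -> float:
    """Temperature change (°C) from a Bragg-wavelength shift (nm): dT = dλ / S."""
    return delta_lambda_nm / SENSITIVITY[mode]
```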

19 pages, 2212 KiB  
Article
Design and Development of Multi-Agent Reinforcement Learning Intelligence on the Robotarium Platform for Embedded System Applications
by Lorenzo Canese, Gian Carlo Cardarilli, Mohammad Mahdi Dehghan Pir, Luca Di Nunzio and Sergio Spanò
Electronics 2024, 13(10), 1819; https://doi.org/10.3390/electronics13101819 - 8 May 2024
Viewed by 491
Abstract
This research explores the Q-learning for real-time swarm (Q-RTS) multi-agent reinforcement learning (MARL) algorithm for robotic applications. The study investigates the efficacy of Q-RTS in reducing the convergence time to a satisfactory movement policy through implementations with four and eight trained agents. Q-RTS was shown to significantly reduce search time in terms of training iterations, from almost a million iterations with one agent to 650,000 iterations with four agents and 500,000 iterations with eight agents. The scalability of the algorithm was addressed by testing it on several agent configurations. A central focus was placed on the design of a sophisticated reward function that considers various postures of the agents, given its critical role in optimizing the Q-learning algorithm. Additionally, the study examined the robustness of the trained agents, revealing their ability to adapt to dynamic environmental changes. The findings have broad implications for improving the efficiency and adaptability of robotic systems in applications such as IoT and embedded systems. The algorithm was implemented and tested on the Georgia Tech Robotarium platform, showing its feasibility for the above-mentioned applications. Full article
(This article belongs to the Special Issue Applied Machine Learning in Intelligent Systems)
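The per-agent learning step in Q-RTS is the standard tabular Q-learning update; the swarm speed-up comes from agents sharing what they learn. A sketch of the update plus a deliberately simplified sharing rule (Q-RTS's actual global/local table combination is more involved than the element-wise merge shown here):

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.95):
    """Standard tabular Q-learning update, the core step run by each agent."""
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

def merge_tables(tables):
    """Share knowledge across the swarm. Simplified stand-in: keep the
    element-wise best value seen by any agent."""
    return np.maximum.reduce(tables)
```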

18 pages, 64491 KiB  
Article
A 5K Efficient Low-Light Enhancement Model by Estimating Increment between Dark Image and Transmission Map Based on Local Maximum Color Value Prior
by Qikang Deng, Dongwon Choo, Hyochul Ji and Dohoon Lee
Electronics 2024, 13(10), 1814; https://doi.org/10.3390/electronics13101814 - 8 May 2024
Viewed by 605
Abstract
Low-light enhancement (LLE) has seen significant advancements over the decades, leading to substantial improvements in image quality that can even surpass the ground truth. However, these advancements have come with a downside: as the models grew in size and complexity, they lost the lightweight, real-time capabilities crucial for applications like surveillance, autonomous driving, smartphones, and unmanned aerial vehicles (UAVs). To address this challenge, we propose an exceptionally lightweight model with only around 5K parameters that is capable of delivering high-quality LLE results. Our method estimates the incremental change from the dark image to the transmission map based on the local maximum color value prior, and we introduce a novel three-channel transmission map that captures more detail and information than the traditional one-channel transmission map. This design allows for more effective matching of the incremental estimates, enabling distinct transmission adjustments to be applied to the R, G, and B channels of the image. The streamlined approach keeps the model lightweight, making it suitable for deployment on low-performance devices without compromising real-time performance. Our experiments confirm the effectiveness of the model, achieving high-quality LLE comparable to the IAT (local) model while utilizing only 0.512 GFLOPs and 4.7K parameters, just 39.1% of the GFLOPs and 23.5% of the parameters used by the IAT (local) model. Full article
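The transmission-map formulation the method builds on can be sketched in a few lines: a one-channel seed map is estimated from the maximum color value prior, and enhancement divides each channel by its own transmission. This is only the classical inversion step; the paper's 5K-parameter network that predicts the increment, and the local-window version of the prior, are not reproduced here:

```python
import numpy as np

def initial_transmission(img):
    """One-channel seed map from the maximum color value prior, simplified
    to a per-pixel channel maximum: bright pixels need little correction."""
    return img.max(axis=-1, keepdims=True)

def enhance(img, t3, eps=1e-3):
    """Per-channel transmission inversion: each of R, G, B has its own map t3."""
    return np.clip(img / np.maximum(t3, eps), 0.0, 1.0)
```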

15 pages, 5670 KiB  
Article
Shaping of the Frequency Response of Photoacoustic Cells with Multi-Cavity Structures
by Wiktor Porakowski and Tomasz Starecki
Electronics 2024, 13(9), 1786; https://doi.org/10.3390/electronics13091786 - 6 May 2024
Viewed by 665
Abstract
In the great majority of cases, the design of resonant photoacoustic cells is based on the use of resonators excited at the frequencies of their main resonances. This work presents a solution in which the use of a multi-cavity structure with the appropriate selection of the mechanical parameters of the cavities and the interconnecting ducts allows for the shaping of the frequency response of the cell. Such solutions may be particularly useful when the purpose of the designed cells is operation at multiple frequencies, e.g., in applications with the simultaneous detection of multiple gaseous compounds. The concept is tested with cells made using 3D printing technology. The measured frequency responses of the tested cells show very good agreement with the simulation results. This allows for an approach in which the development of a cell with the desired frequency response can be initially based on modeling, without the need for the time-consuming and expensive process of manufacturing and measuring numerous modifications of the cell. Full article
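Each cavity connected through a duct behaves, to first order, as a Helmholtz resonator, which is why the cavity volumes and duct dimensions shape the frequency response. A back-of-the-envelope sketch (the dimensions below are illustrative, not those of the tested cells):

```python
import math

def helmholtz_frequency(c, duct_area, cavity_volume, duct_length, end_corr=0.0):
    """Resonant frequency (Hz) of a single cavity + duct (Helmholtz) element:
    f = c/(2*pi) * sqrt(A / (V * L_eff))."""
    l_eff = duct_length + end_corr
    return c / (2 * math.pi) * math.sqrt(duct_area / (cavity_volume * l_eff))

# Example: 343 m/s sound speed, 5 mm radius duct (~7.85e-5 m^2),
# 100 cm^3 cavity, 2 cm duct (no end correction)
f = helmholtz_frequency(343.0, 7.85e-5, 1.0e-4, 0.02)
```

Coupling several such cavity–duct elements yields multiple resonances, which is what allows the cell response to be shaped for multi-frequency operation.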

22 pages, 5575 KiB  
Article
Advancing into Millimeter Wavelengths for IoT: Multibeam Modified Planar Luneburg Lens Antenna with Porous Plastic Material
by Javad Pourahmadazar, Bal S. Virdee and Tayeb A. Denidni
Electronics 2024, 13(9), 1605; https://doi.org/10.3390/electronics13091605 - 23 Apr 2024
Cited by 1 | Viewed by 621
Abstract
This paper introduces an innovative antenna design utilizing a cylindrical dielectric Luneburg lens tailored for 60 GHz Internet of Things (IoT) applications. To optimize V-band communications, the permittivity of the dielectric medium is strategically adjusted by precisely manipulating the physical porosity. In IoT scenarios, employing a microstrip dipole antenna with a radiation pattern resembling cos^10(θ) enhances beam illumination within the waveguide, thereby improving communication and sensing capabilities. The refractive index gradient of the Luneburg lens is realized by varying the material’s porosity using air holes, prioritizing signal accuracy and reliability. Fabricated from polyimide by 3D printing, the proposed antenna features a slim profile ideal for IoT applications with space constraints, such as smart homes and unmanned aerial vehicles. Its innovative design is underscored by selective laser sintering (SLS), offering scalable and cost-effective production. Measured results demonstrate the antenna’s exceptional performance, surpassing IoT deployment standards, with a scanning range between −67 and +67 degrees. This approach to designing multibeam Luneburg lens antennas, leveraging 3D printing’s porosity control for millimeter-wave applications, represents a significant advancement in antenna technology and paves the way for enhanced IoT infrastructure characterized by advanced sensing capabilities and improved connectivity. Full article
(This article belongs to the Special Issue Antennas for IoT Devices)
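The underlying design rule is the classic Luneburg index profile, n(r) = sqrt(2 − (r/R)²), realized here by drilling air holes so the effective permittivity drops toward the rim. A sketch under a simple linear mixing rule (the mixing model and the polyimide permittivity of about 3.5 are assumptions; the paper's exact hole layout is not reproduced):

```python
import math

def luneburg_index(r, R):
    """Classic Luneburg law: n = sqrt(2) at the center, n = 1 at the rim."""
    return math.sqrt(2.0 - (r / R) ** 2)

def porosity_for_index(n, eps_host=3.5):
    """Air-hole volume fraction giving effective permittivity n^2 under a
    linear volume-average mixing rule (assumed model and host permittivity)."""
    eps_eff = n ** 2
    return (eps_host - eps_eff) / (eps_host - 1.0)
```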

14 pages, 6484 KiB  
Article
Unveiling Acoustic Cavitation Characterization in Opaque Chambers through a Low-Cost Piezoelectric Sensor Approach
by José Fernandes, Paulo J. Ramísio and Hélder Puga
Electronics 2024, 13(8), 1581; https://doi.org/10.3390/electronics13081581 - 20 Apr 2024
Viewed by 666
Abstract
This study investigates the characterization of acoustic cavitation in a water-filled, opaque chamber induced by ultrasonic waves at 20 kHz. It examines the effect of different acoustic radiator geometries on cavitation generation across varying electrical power levels. A cost-effective piezoelectric sensor, precisely positioned, quantifies cavitation under assorted power settings. Two acoustic radiator shape configurations, one perforated with holes and one solid, were examined. The piezoelectric sensor demonstrated efficacy in measuring acoustic cavitation, corroborating the existing literature. This was achieved through Fast Fourier Transform (FFT) analysis of the voltage data, specifically targeting sub-harmonic patterns, thereby providing a robust method for cavitation detection. Results demonstrate that perforated geometries enhance cavitation intensity at lower power levels, while solid shapes predominantly affect cavitation axially, exhibiting decreased activity at minimal power. The findings recommend using the two different shape geometries on the acoustic radiator for efficient cavitation detection, highlighting intense cavitation on the radial walls and cavitation generation at the bottom. Due to the stochastic nature of cavitation, averaging the data is critical. The spatial limitation of the sensor necessitates prioritizing specific areas over complete coverage, with multiple sensors recommended for comprehensive cavitation pattern analysis. Full article
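The detection principle is straightforward to reproduce: cavitation shows up as sub-harmonic energy (e.g., at f0/2) in the spectrum of the sensor voltage. A synthetic sketch of the FFT check (the sample rate, duration, and amplitudes are invented; a real signal would come from the piezoelectric sensor):

```python
import numpy as np

fs = 200_000                     # sample rate (Hz), assumed
f0 = 20_000                      # ultrasonic drive frequency from the study
t = np.arange(0, 0.05, 1 / fs)   # 50 ms record -> 10,000 samples, 20 Hz bins
# Synthetic sensor voltage: drive tone plus a cavitation sub-harmonic at f0/2
v = 1.0 * np.sin(2 * np.pi * f0 * t) + 0.3 * np.sin(2 * np.pi * (f0 / 2) * t)

spec = np.abs(np.fft.rfft(v)) / len(v)       # ~A/2 at each tone's exact bin
freqs = np.fft.rfftfreq(len(v), 1 / fs)

def band_peak(f_target, bw=200.0):
    """Largest spectral magnitude within +/- bw of a target frequency."""
    mask = np.abs(freqs - f_target) < bw
    return spec[mask].max()

# A pronounced f0/2 component relative to the drive indicates cavitation
subharmonic_ratio = band_peak(f0 / 2) / band_peak(f0)
```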

14 pages, 27254 KiB  
Article
GAN-Based Data Augmentation with Vehicle Color Changes to Train a Vehicle Detection CNN
by Aroona Ayub and HyungWon Kim
Electronics 2024, 13(7), 1231; https://doi.org/10.3390/electronics13071231 - 26 Mar 2024
Cited by 2 | Viewed by 545
Abstract
Object detection is a challenging task that requires a lot of labeled data to train convolutional neural networks (CNNs) that can achieve human-level accuracy. However, such data are not easy to obtain, as they involve significant manual work and costs to annotate the objects in images. Researchers have used traditional data augmentation techniques to increase the amount of training data available to them. A recent trend in object detection is to use generative models to automatically create annotated data that can enrich a training set and improve the performance of the target model. This paper presents a method of training the proposed ColorGAN network, which is used to generate augmented data for the target domain of interest with the least compromise in quality. We demonstrate a method to train a GAN with images of vehicles in different colors. Then, we demonstrate that our ColorGAN can change the color of the vehicles in any given vehicle dataset to a set of specified colors, which can serve as an augmented training dataset. Our experimental results show that the augmented dataset generated by the proposed method helps enhance the detection performance of a CNN for applications where the original training data are limited. Our experiments also show that the model achieves a higher mAP of 76% when trained with the augmented images together with the original training dataset. Full article
(This article belongs to the Special Issue New Trends in Artificial Neural Networks and Its Applications)
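As a rough intuition for what the augmentation produces (this is not the paper's method: ColorGAN learns the recoloring end-to-end, while the stand-in below just shifts the mean color inside a given vehicle mask):

```python
import numpy as np

def recolor_vehicles(img, mask, target_rgb):
    """Naive color transfer inside a boolean vehicle mask: shift the region's
    mean color to target_rgb, keeping per-pixel deviations (shading) intact."""
    out = img.astype(float)
    region = out[mask]                                   # (K, 3) masked pixels
    shifted = region - region.mean(axis=0) + np.asarray(target_rgb, float)
    out[mask] = np.clip(shifted, 0, 255)
    return out.astype(np.uint8)
```

A learned GAN preserves highlights, reflections, and material appearance far better than this mean shift, which is why the paper trains a network for the task.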

25 pages, 4974 KiB  
Article
Augmented Reality in Industry 4.0 Assistance and Training Areas: A Systematic Literature Review and Bibliometric Analysis
by Ginés Morales Méndez and Francisco del Cerro Velázquez
Electronics 2024, 13(6), 1147; https://doi.org/10.3390/electronics13061147 - 21 Mar 2024
Cited by 1 | Viewed by 1268
Abstract
Augmented reality (AR) technology is making a strong appearance on the industrial landscape, driven by significant advances in technological tools and developments. Its application in areas such as training and assistance has attracted the attention of the research community, which sees AR as an opportunity to provide operators with a more visual, immersive, and interactive environment. This article analyzes the integration of AR in the context of the fourth industrial revolution, commonly referred to as Industry 4.0. Starting with a systematic review, 60 relevant studies were identified from the Scopus and Web of Science databases. These findings were used to build bibliometric networks, providing a broad perspective on AR applications in training and assistance in the context of Industry 4.0. The article presents the current landscape, existing challenges, and future directions of AR research applied to industrial training and assistance, based on a systematic literature review and citation network analysis. The findings highlight a growing trend in AR research, with a particular focus on addressing and overcoming the challenges associated with its implementation in complex industrial environments. Full article
(This article belongs to the Special Issue Perception and Interaction in Mixed, Augmented, and Virtual Reality)

17 pages, 6835 KiB  
Article
Grid Forming Technologies to Improve Rate of Change in Frequency and Frequency Nadir: Analysis-Based Replicated Load Shedding Events
by Oscar D. Garzon, Alexandre B. Nassif and Matin Rahmatian
Electronics 2024, 13(6), 1120; https://doi.org/10.3390/electronics13061120 - 19 Mar 2024
Cited by 1 | Viewed by 865
Abstract
Electric power generation is quickly transitioning toward nontraditional inverter-based resources (IBRs). Prevalent devices today are solar PV, wind generators, and battery energy storage systems (BESS) based on electrochemical packs. These IBRs are interconnected throughout the power system via power electronics inverter bridges, which have sophisticated controls. This paper studies the impacts and benefits of integrating grid forming (GFM) inverters and energy storage on power system stability by replicating real events in which the loss of generation units resulted in large load shedding events. First, the authors tuned the power system dynamic model in the Power System Simulator for Engineering (PSSE) to replicate the event records and, upon integrating the IBRs, analyzed the dynamic responses of the BESS. This was conducted for both GFM and grid following (GFL) modes. Additionally, models for a Grid Forming Static Synchronous Compensator (GFM STATCOM) were also created and simulated to quantify the benefits of this technology and to enable a techno-economic comparison with GFM BESSs. The results presented in this paper demonstrate the need for industry standardization in the application of GFM inverters to unleash their benefits to the bulk electric grid. The results also demonstrate that the GFM STATCOM is a very capable system that can augment bulk system inertia, effectively reducing the occurrence of load shedding events. Full article
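The two metrics in the title follow from the swing equation: immediately after a loss of generation, the rate of change of frequency (ROCOF) is proportional to the power imbalance and inversely proportional to the system inertia, which is exactly what GFM devices augment. A one-function sketch (the 60 Hz base and the sample numbers are illustrative):

```python
def rocof(delta_p_pu, h_sec, f0=60.0):
    """Initial rate of change of frequency (Hz/s) after a power imbalance.

    delta_p_pu: lost generation as a fraction of the system MVA base
    h_sec:      aggregate inertia constant H in seconds (GFM devices add to this)
    """
    return -delta_p_pu * f0 / (2.0 * h_sec)

# Example: losing 10% of generation on a system with H = 5 s
initial_rocof = rocof(0.10, 5.0)   # a larger H gives a shallower frequency slope
```

A shallower initial slope leaves more time for primary response to arrest the decline, raising the frequency nadir and avoiding under-frequency load shedding.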

28 pages, 5284 KiB  
Article
IoT-Based Intrusion Detection System Using New Hybrid Deep Learning Algorithm
by Sami Yaras and Murat Dener
Electronics 2024, 13(6), 1053; https://doi.org/10.3390/electronics13061053 - 12 Mar 2024
Cited by 2 | Viewed by 2256
Abstract
The most significant threat that networks established in the IoT may encounter is cyber attacks, the most common of which are DDoS attacks. After an attack, the communication traffic of the network can be disrupted, and the energy of sensor nodes can quickly deplete. Therefore, detecting occurring attacks is of great importance. Given the numerous sensor nodes in an established network, analyzing the network traffic data through traditional methods can become impossible, making analysis in a big data environment necessary. This study aims to analyze the obtained network traffic dataset in a big data environment and detect attacks in the network using a deep learning algorithm. The study is conducted using PySpark with Apache Spark in the Google Colaboratory (Colab) environment, with the Keras and Scikit-Learn libraries. The ‘CICIoT2023’ and ‘TON_IoT’ datasets are used for training and testing the model. The features in the datasets are reduced using the correlation method, ensuring that only significant features are included in the tests. A hybrid deep learning algorithm is designed using a one-dimensional CNN and LSTM. The developed method is compared with ten machine learning and deep learning algorithms, and the model’s performance is evaluated using the accuracy, precision, recall, and F1 parameters. The study achieves an accuracy of 99.995% for binary classification and 99.96% for multiclass classification on the ‘CICIoT2023’ dataset, and a binary classification success rate of 98.75% on the ‘TON_IoT’ dataset. Full article
(This article belongs to the Section Artificial Intelligence)
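The correlation-based reduction step mentioned in the abstract can be sketched directly: compute the absolute feature–feature correlation matrix and greedily drop any feature too correlated with one already kept (the 0.9 threshold and toy data below are placeholders; the paper's exact criterion may differ):

```python
import numpy as np

def drop_correlated(X, names, threshold=0.9):
    """Greedy filter: keep a feature only if its |correlation| with every
    already-kept feature is at or below the threshold."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(X.shape[1]):
        if all(corr[j, k] <= threshold for k in keep):
            keep.append(j)
    return [names[j] for j in keep]
```

In the paper this runs at scale inside Spark before the reduced feature set is fed to the one-dimensional CNN–LSTM model.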

21 pages, 3294 KiB  
Review
Optimizing Piezoelectric Energy Harvesting from Mechanical Vibration for Electrical Efficiency: A Comprehensive Review
by Demeke Girma Wakshume and Marek Łukasz Płaczek
Electronics 2024, 13(5), 987; https://doi.org/10.3390/electronics13050987 - 5 Mar 2024
Cited by 3 | Viewed by 2334
Abstract
In the current era, energy resources from the environment via piezoelectric materials are not only used for self-powered electronic devices, but also play a significant role in creating a pleasant living environment. Piezoelectric materials have the potential to produce power from microwatts to milliwatts depending on the ambient conditions. The energy obtained from these materials is used for powering small electronic devices such as sensors, health monitoring devices, and various smart electronic gadgets like watches, personal computers, and cameras. This review explains the comprehensive concepts related to piezoelectric (classical and non-classical) materials, energy harvesting from the mechanical vibration of piezoelectric materials, structural modelling, and their optimization. Non-conventional smart materials, such as polyceramics, polymers, or composite piezoelectric materials, stand out due to their slender actuator and sensor profiles, offering superior performance, flexibility, and reliability at competitive costs despite their susceptibility to performance fluctuations caused by temperature variations. Accurate modelling and performance optimization, employing analytical, numerical, and experimental methodologies, are imperative. This review also furthers research and development in optimizing piezoelectric energy utilization, suggesting the need for continued experimentation to select optimal materials and structures for various energy applications. Full article
(This article belongs to the Special Issue Energy Harvesting and Storage Technologies)
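For reference, the structural modelling the review surveys builds on the standard strain–charge constitutive relations of piezoelectricity (textbook notation, not taken from the review itself):

```latex
S = s^{E}\,T + d^{\,t}\,E, \qquad D = d\,T + \varepsilon^{T}\,E
```

where S is strain, T stress, E electric field, and D electric displacement; s^E is the compliance at constant field, d the piezoelectric coupling matrix, and ε^T the permittivity at constant stress. Energy harvesting exploits the direct effect in the second relation, where mechanical stress produces electric displacement through d.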

17 pages, 6522 KiB  
Article
Design of a Convolutional Neural Network Accelerator Based on On-Chip Data Reordering
by Yang Liu, Yiheng Zhang, Xiaoran Hao, Lan Chen, Mao Ni, Ming Chen and Rong Chen
Electronics 2024, 13(5), 975; https://doi.org/10.3390/electronics13050975 - 4 Mar 2024
Cited by 1 | Viewed by 1146
Abstract
Convolutional neural networks have been widely applied in the field of computer vision. In convolutional neural networks, convolution operations account for more than 90% of the total computational workload. The current mainstream approach to achieving highly energy-efficient convolution operations is through dedicated hardware accelerators. Convolution operations involve a significant amount of weight and input feature data. Due to limited on-chip cache space in accelerators, the computation process involves a significant amount of off-chip DRAM memory access. The latency of DRAM access is 20 times higher than that of SRAM, and the energy consumption of DRAM access is 100 times higher than that of multiply–accumulate (MAC) units. It is evident that the “memory wall” and “power wall” issues in neural network computation remain challenging. This paper presents the design of a hardware accelerator for convolutional neural networks that employs a dataflow optimization strategy based on on-chip data reordering. This strategy improves on-chip data utilization and reduces the frequency of data exchanges between the on-chip cache and off-chip DRAM. The experimental results indicate that, compared to the accelerator without this strategy, it can reduce the data exchange frequency by up to 82.9%. Full article
(This article belongs to the Special Issue Artificial Intelligence and Signal Processing: Circuits and Systems)
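The benefit of on-chip reuse can be illustrated with a back-of-the-envelope traffic model. This is a deliberately crude sketch under assumed layer dimensions, not the paper's accelerator model:

```python
def dram_traffic(h, w, c_in, c_out, k, reuse_on_chip):
    """Rough count of off-chip words moved for one conv layer.

    Without on-chip reuse, every output channel re-reads the whole
    input feature map; with reordering, the input is read once and
    reused for all `c_out` output channels. Weight and output traffic
    are identical in both cases. Purely illustrative.
    """
    weights = k * k * c_in * c_out
    outputs = h * w * c_out
    inputs = h * w * c_in * (1 if reuse_on_chip else c_out)
    return weights + outputs + inputs

naive = dram_traffic(56, 56, 64, 128, 3, reuse_on_chip=False)
reordered = dram_traffic(56, 56, 64, 128, 3, reuse_on_chip=True)
print(f"traffic reduced by {100 * (1 - reordered / naive):.1f}%")
```

Even this toy model shows input re-reads dominating off-chip traffic, which is why reordering for reuse pays off so strongly.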

14 pages, 1300 KiB  
Article
Hybrid FSO/RF Communications in Space–Air–Ground Integrated Networks: A Reduced Overhead Link Selection Policy
by Petros S. Bithas, Hector E. Nistazakis, Athanassios Katsis and Liang Yang
Electronics 2024, 13(4), 806; https://doi.org/10.3390/electronics13040806 - 19 Feb 2024
Cited by 2 | Viewed by 794
Abstract
Space–air–ground integrated network (SAGIN) is considered an enabler for sixth-generation (6G) networks. By integrating terrestrial and non-terrestrial (satellite, aerial) networks, SAGIN seems to be a quite promising solution to provide reliable connectivity everywhere and all the time. Its availability can be further enhanced if hybrid free space optical (FSO)/radio frequency (RF) links are adopted. In this paper, the performance of a hybrid FSO/RF communication system operating in SAGIN has been analytically evaluated. In the considered system, a high-altitude platform station (HAPS) is used to forward the satellite signal to the ground station. Moreover, the FSO channel model assumed takes into account the turbulence, pointing errors, and path losses, while for the RF links, a relatively new composite fading model has been considered. In this context, a new link selection scheme has been proposed that is designed to reduce the signaling overhead required for the switching operations between the RF and FSO links. The analytical framework that has been developed is based on Markov chain theory. Capitalizing on this framework, the performance of the system has been investigated using the criteria of outage probability and the average number of link estimations. The numerical results presented reveal that the new selection scheme offers a good compromise between performance and complexity. Full article
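A toy version of the reduced-overhead idea can be simulated with a two-state Markov chain. All transition probabilities and the probing period below are invented for illustration; the paper's actual channel models (turbulence, pointing errors, composite fading) are far richer:

```python
import random

def simulate(p_down=0.05, p_up=0.5, check_every=5, steps=50_000, seed=7):
    """The FSO link alternates UP/DOWN as a two-state Markov chain; the
    controller probes it only every `check_every` slots (fewer probes
    means less signaling overhead) and falls back to RF whenever the
    last probe saw the FSO link down. An outage slot is one spent on
    FSO while the link is actually down (RF is assumed always up).
    Returns (outage fraction, number of link estimations)."""
    rng = random.Random(seed)
    fso_up, use_fso = True, True
    outages = probes = 0
    for t in range(steps):
        # channel state evolves first
        fso_up = (rng.random() >= p_down) if fso_up else (rng.random() < p_up)
        if t % check_every == 0:          # link estimation instant
            probes += 1
            use_fso = fso_up
        if use_fso and not fso_up:
            outages += 1
    return outages / steps, probes

out_fast, probes_fast = simulate(check_every=1)   # probe every slot
out_slow, probes_slow = simulate(check_every=5)   # 5x less signaling
```

Probing every slot tracks the channel perfectly in this toy model, at five times the signaling cost; the trade-off between outage probability and the number of link estimations is exactly what the paper's Markov-chain analysis quantifies.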

34 pages, 3253 KiB  
Review
Review of Industry 4.0 from the Perspective of Automation and Supervision Systems: Definitions, Architectures and Recent Trends
by Francisco Javier Folgado, David Calderón, Isaías González and Antonio José Calderón
Electronics 2024, 13(4), 782; https://doi.org/10.3390/electronics13040782 - 16 Feb 2024
Cited by 9 | Viewed by 2298
Abstract
Industry 4.0 is a new paradigm that is transforming the industrial scenario. It has generated a large number of scientific studies, commercial equipment and, above all, high expectations. Nevertheless, there is no single definition or general agreement on its implications, specifically in the field of automation and supervision systems. In this paper, a review of the Industry 4.0 concept, with equivalent terms, enabling technologies and reference architectures for its implementation, is presented. It will be shown that this paradigm results from the confluence and integration of both existing and disruptive technologies. Furthermore, the most relevant trends in industrial automation and supervision systems are covered, highlighting the convergence of traditional equipment and equipment characterized by the Internet of Things (IoT). This paper is intended to serve as a reference document as well as a guide for the design and deployment of automation and supervision systems framed in Industry 4.0. Full article
(This article belongs to the Section Industrial Electronics)

30 pages, 17457 KiB  
Article
Melanoma Skin Cancer Identification with Explainability Utilizing Mask Guided Technique
by Lahiru Gamage, Uditha Isuranga, Dulani Meedeniya, Senuri De Silva and Pratheepan Yogarajah
Electronics 2024, 13(4), 680; https://doi.org/10.3390/electronics13040680 - 6 Feb 2024
Cited by 4 | Viewed by 1402
Abstract
Melanoma is a highly prevalent and lethal form of skin cancer, which has a significant impact globally. The chances of recovery for melanoma patients substantially improve with early detection. Currently, deep learning (DL) methods are gaining popularity in assisting with the identification of diseases using medical imaging. The paper introduces a computational model for classifying melanoma skin cancer images using convolutional neural networks (CNNs) and vision transformers (ViT) with the HAM10000 dataset. Both approaches utilize mask-guided techniques, employing a specialized U2-Net segmentation module to generate masks. The CNN-based approach utilizes ResNet50, VGG16, and Xception with transfer learning. The training process is enhanced using a Bayesian hyperparameter tuner. Moreover, this study applies gradient-weighted class activation mapping (Grad-CAM) and Grad-CAM++ to generate heatmaps that explain the classification models. These visual heatmaps elucidate the contribution of each input region to the classification outcome. The CNN-based approach achieved the highest accuracy of 98.37% with the Xception model, with a sensitivity and specificity of 95.92% and 99.01%, respectively. The ViT-based approach achieved an accuracy, sensitivity, and specificity of 92.79%, 91.09%, and 93.54%, respectively. Furthermore, the performance of the model was assessed through intersection over union (IOU) and other qualitative evaluations. Finally, we developed the proposed model as a web application that can be used as a real-time support tool for medical practitioners. A system usability score of 86.87% was reported, which shows the usefulness of the proposed solution. Full article
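The mask-guided step can be pictured as element-wise gating of the input image by the segmentation mask before classification. A minimal sketch (the real pipeline uses a trained U2-Net to produce the mask; here it is a hand-made binary array):

```python
import numpy as np

def apply_lesion_mask(image, mask):
    """Zero out non-lesion pixels so the classifier attends only to
    the segmented region. image: (H, W, 3) floats; mask: (H, W) in {0, 1}."""
    return image * mask[..., None]

img = np.ones((4, 4, 3))            # stand-in for a dermoscopic image
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0                # pretend the lesion is the centre patch
masked = apply_lesion_mask(img, mask)
```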

26 pages, 352 KiB  
Review
Combining Machine Learning and Edge Computing: Opportunities, Challenges, Platforms, Frameworks, and Use Cases
by Piotr Grzesik and Dariusz Mrozek
Electronics 2024, 13(3), 640; https://doi.org/10.3390/electronics13030640 - 3 Feb 2024
Cited by 1 | Viewed by 2927
Abstract
In recent years, we have been observing the rapid growth and adoption of IoT-based systems, enhancing multiple areas of our lives. Concurrently, the utilization of machine learning techniques has surged, often for similar use cases as those seen in IoT systems. In this survey, we aim to focus on the combination of machine learning and the edge computing paradigm. The presented research commences with the topic of edge computing, its benefits, such as reduced data transmission, improved scalability, and reduced latency, as well as the challenges associated with this computing paradigm, like energy consumption, constrained devices, security, and device fleet management. It then presents the motivations behind the combination of machine learning and edge computing, such as the availability of more powerful edge devices, improving data privacy, reducing latency, or lowering reliance on centralized services. Then, it describes several edge computing platforms, with a focus on their capability to enable edge intelligence workflows. It also reviews the currently available edge intelligence frameworks and libraries, such as TensorFlow Lite or PyTorch Mobile. Afterward, the paper focuses on the existing use cases for edge intelligence in areas like industrial applications, healthcare applications, smart cities, environmental monitoring, or autonomous vehicles. Full article
(This article belongs to the Special Issue Towards Efficient and Reliable AI at the Edge)

17 pages, 2087 KiB  
Article
Multi-Channel Graph Convolutional Networks for Graphs with Inconsistent Structures and Features
by Xinglong Chang, Jianrong Wang, Rui Wang, Tao Wang, Yingkui Wang and Weihao Li
Electronics 2024, 13(3), 607; https://doi.org/10.3390/electronics13030607 - 1 Feb 2024
Viewed by 1154
Abstract
Graph convolutional networks (GCNs) have attracted increasing attention in various fields due to their significant capacity to process graph-structured data. Typically, the GCN model and its variants heavily rely on the transmission of node features across the graph structure, which implicitly assumes that the graph structure and node features are consistent, i.e., they carry related information. However, in many real-world networks, node features may unexpectedly mismatch with the structural information. Existing GCNs fail to generalize to inconsistent scenarios and are even outperformed by models that ignore the graph structure or node features. To address this problem, we investigate how to extract representations from both the graph structure and node features. Consequently, we propose the multi-channel graph convolutional network (MCGCN) for graphs with inconsistent structures and features. Specifically, the MCGCN encodes the graph structure and node features using two specific convolution channels to extract two separate specific representations. Additionally, two joint convolution channels are constructed to extract the common information shared by the graph structure and node features. Finally, an attention mechanism is utilized to adaptively learn the importance weights of these channels under the guidance of the node classification task. In this way, our model can handle both consistent and inconsistent scenarios. Extensive experiments on both synthetic and real-world datasets for node classification and recommendation tasks show that our methods, MCGCN-A and MCGCN-I, achieve the best performance on seven out of eight datasets and the second-best performance on the remaining dataset. For simpler graph structures or tasks where the overhead of multiple convolution channels is not justified, traditional single-channel GCN models might be more efficient. Full article
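The channel-plus-attention idea can be sketched with plain NumPy: one GCN propagation per channel, then a softmax-weighted combination. The feature-similarity graph and the fixed attention scores below are illustrative placeholders; in MCGCN both the channels and the weights are learned:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN step: H' = relu(D^{-1/2} (A + I) D^{-1/2} H W)."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    return np.maximum((A_hat / np.sqrt(np.outer(d, d))) @ H @ W, 0.0)

def attention_combine(Z_struct, Z_feat, scores):
    """Softmax-weighted sum of the per-channel embeddings."""
    w = np.exp(scores) / np.exp(scores).sum()
    return w[0] * Z_struct + w[1] * Z_feat

rng = np.random.default_rng(0)
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # graph structure
H = rng.normal(size=(3, 4))                                   # node features
W = rng.normal(size=(4, 2))
A_feat = (H @ H.T > 0).astype(float)       # crude feature-similarity graph
Z_struct = gcn_layer(A, H, W)              # structure channel
Z_feat = gcn_layer(A_feat, H, W)           # feature channel
Z = attention_combine(Z_struct, Z_feat, scores=np.array([0.0, 0.0]))
```

When structure and features disagree, learned attention scores let the model lean on whichever channel carries the task-relevant signal.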

21 pages, 7312 KiB  
Article
Cyber-Resilient Converter Control System for Doubly Fed Induction Generator-Based Wind Turbine Generators
by Nathan Farrar and Mohd. Hasan Ali
Electronics 2024, 13(3), 492; https://doi.org/10.3390/electronics13030492 - 24 Jan 2024
Cited by 1 | Viewed by 955
Abstract
As wind turbine generator systems become more common in the modern power grid, the question of how to adequately protect them from cyber criminals has become a major theme in the development of new control systems. As such, artificial intelligence (AI) and machine learning (ML) algorithms have become major contributors to preventing, detecting, and mitigating cyber-attacks in the power system. In their current state, wind turbine generator systems are woefully unprepared for a coordinated and sophisticated cyber attack. With the implementation of internet-of-things (IoT) devices in the power control network, cyber risks have increased exponentially. The literature reports impact analyses and explores detection techniques for cyber attacks on wind turbine generator systems; however, almost no work on mitigating the adverse effects of cyber attacks on wind turbine control systems has been reported. To overcome these limitations, this paper proposes implementing an AI-based converter controller, i.e., a multi-agent deep deterministic policy gradient (DDPG) method that can mitigate any adverse effects that communication delays or bad data could have on a grid-connected doubly fed induction generator (DFIG)-based wind turbine generator or wind farm. The performance of the proposed DDPG controller has been compared with that of a variable proportional–integral (VPI) control-based mitigation method. The proposed technique has been simulated and validated utilizing MATLAB/Simulink software, version R2023A, to demonstrate the effectiveness of the proposed method. The performance of the proposed DDPG method is better than that of the VPI method in mitigating the adverse impacts of cyber attacks on wind generator systems, as validated by the plots and the root mean square error table found in the results section. Full article
(This article belongs to the Special Issue Advances in Renewable Energy and Electricity Generation)
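One ingredient of DDPG worth making concrete is the soft (Polyak) update that keeps the target networks slowly tracking the online ones, which stabilises learning under noisy or delayed measurements. A minimal sketch with made-up parameter arrays (the paper's actual actor/critic architecture is not reproduced):

```python
import numpy as np

def soft_update(target, online, tau=0.005):
    """DDPG target-network update: theta_t <- tau*theta + (1-tau)*theta_t,
    applied element-wise to every parameter array of the actor/critic."""
    return [tau * w + (1.0 - tau) * w_t for w_t, w in zip(target, online)]

online = [np.ones((2, 2))]     # stand-in for trained actor weights
target = [np.zeros((2, 2))]    # target copy starts elsewhere
for _ in range(3):
    target = soft_update(target, online, tau=0.5)  # large tau to show the drift
```

After three updates with tau = 0.5 the target has moved to 0.875 of the online value; with the usual tau on the order of 0.005 the drift is correspondingly gentler.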

17 pages, 6140 KiB  
Article
Predictive Maintenance of Machinery with Rotating Parts Using Convolutional Neural Networks
by Stamatis Apeiranthitis, Paraskevi Zacharia, Avraam Chatzopoulos and Michail Papoutsidakis
Electronics 2024, 13(2), 460; https://doi.org/10.3390/electronics13020460 - 22 Jan 2024
Cited by 1 | Viewed by 1690
Abstract
All kinds of vessels consist of dozens of complex machines with rotating parts and electric motors that operate continuously in harsh environments with excess temperature, humidity, vibration, fatigue, and load. A breakdown or malfunction in one of these machines can significantly impact a vessel’s operation and safety and, consequently, the safety of the crew and the environment. To maintain operational efficiency and seaworthiness, the shipping industry invests substantial resources in preventive maintenance and repairs. This study presents the economic and technical benefits of predictive maintenance over traditional preventive maintenance and repair-by-replacement approaches in the maritime domain. By leveraging modern technology and artificial intelligence, we can analyze the operating conditions of machinery by obtaining measurements either from sensors permanently installed on the machinery or by utilizing portable measuring instruments. This facilitates the early identification of potential damage, thereby enabling efficient strategizing for future maintenance and repair endeavors. In this paper, we propose and develop a convolutional neural network that is fed with raw vibration measurements acquired in a laboratory environment from the ball bearings of a motor. Then, we investigate whether the proposed network can accurately detect the functional state of ball bearings and categorize any possible failures present, contributing to improved maintenance practices in the shipping industry. Full article
(This article belongs to the Special Issue Intelligent Manufacturing Systems and Applications in Industry 4.0)
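Feeding raw vibration into a 1-D CNN usually starts with slicing the record into fixed-length overlapping windows. A sketch with assumed window and hop sizes (the paper's exact preprocessing is not reproduced here):

```python
import numpy as np

def segment_windows(signal, win=1024, hop=512):
    """Slice a 1-D vibration record into overlapping windows shaped
    (n_windows, win, 1), the usual input layout for a 1-D CNN."""
    starts = range(0, len(signal) - win + 1, hop)
    return np.stack([signal[s:s + win] for s in starts])[..., None]

sig = np.sin(0.01 * np.arange(8192))   # stand-in for a bearing recording
X = segment_windows(sig)
```

Each window then becomes one training sample labelled with the bearing's known condition.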

14 pages, 8860 KiB  
Article
An Effective Spherical NF/FF Transformation Suitable for Characterising an Antenna under Test in Presence of an Infinite Perfectly Conducting Ground Plane
by Flaminio Ferrara, Claudio Gennarelli, Rocco Guerriero and Giovanni Riccio
Electronics 2024, 13(2), 397; https://doi.org/10.3390/electronics13020397 - 18 Jan 2024
Viewed by 640
Abstract
An effective near-field to far-field transformation using a reduced number of near-field measurements collected via a spherical scan over the upper hemisphere, due to the presence of a flat metallic ground, is devised in this paper. Such a transformation relies on the non-redundant sampling representations of electromagnetic fields and exploits the image principle to properly account for the metallic ground, supposed to be of infinite extent and realised by perfectly conducting material. The sampling representation of the probe voltage over the upper hemisphere is developed by modelling the antenna under test and its image by a very adaptable convex surface, which is able to fit as much as possible the geometry of any kind of antenna, thus minimising the volumetric redundancy and, accordingly, the number of required samples as well as the measurement time. Then, the use of a two-dimensional optimal sampling interpolation algorithm allows the reconstruction of the voltage value at each sampling point of the spherical grid required by the classical near-field-to-far-field transformation developed by Hansen. Numerical examples proving the effectiveness of the developed sampling representation and related near-field-to-far-field transformation techniques are reported. Full article
(This article belongs to the Special Issue Feature Papers in Microwave and Wireless Communications Section)

18 pages, 2413 KiB  
Article
A Federated Learning-Based Resource Allocation Scheme for Relaying-Assisted Communications in Multicellular Next Generation Network Topologies
by Ioannis A. Bartsiokas, Panagiotis K. Gkonis, Dimitra I. Kaklamani and Iakovos S. Venieris
Electronics 2024, 13(2), 390; https://doi.org/10.3390/electronics13020390 - 17 Jan 2024
Cited by 1 | Viewed by 873
Abstract
Growing and diverse user needs, along with the need for continuous access with minimal delay in densely populated machine-type networks, have led to a significant overhaul of modern mobile communication systems. Within this realm, the integration of advanced physical layer techniques such as relaying-assisted transmission in beyond fifth-generation (B5G) networks aims to not only enhance network performance but also extend coverage across multicellular orientations. However, in cellular environments, the increased interference levels and the complex channel representations introduce a notable rise in the computational complexity associated with radio resource management (RRM) tasks. Machine and deep learning (ML/DL) have been proposed as an efficient way to support the enhanced user demands in densely populated environments, since ML/DL models can relax the traffic load that is associated with RRM tasks. There is, however, a need in these solutions for distributed execution of training tasks to accelerate the decision-making process in RRM tasks. For this purpose, federated learning (FL) schemes are considered a promising field of research for next-generation (NG) networks’ RRM. This paper proposes an FL approach to tackle the joint relay node (RN) selection and resource allocation problem subject to power management constraints in B5G networks. The optimization objective of this approach is to jointly elevate energy efficiency (EE) and spectral efficiency (SE) levels. The performance of the proposed approach is evaluated for various relaying-assisted transmission topologies and through comparison with other state-of-the-art approaches (both ML and non-ML). In particular, the total system EE and SE can be improved by up to approximately 10–20% compared to a state-of-the-art centralized ML scheme. Moreover, the achieved accuracy can be improved by up to 10% compared to state-of-the-art non-ML solutions, while training time is reduced by approximately 50%. Full article
(This article belongs to the Special Issue Feature Papers in Microwave and Wireless Communications Section)
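The federated ingredient can be made concrete with the FedAvg aggregation rule: each node trains locally and the server averages parameters weighted by local dataset size. A generic sketch, not the paper's specific RN-selection scheme:

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Server-side FedAvg: parameter-wise average of the client models,
    weighted by the number of local training samples."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum((n / total) * w[i] for w, n in zip(client_weights, client_sizes))
        for i in range(n_params)
    ]

w_a = [np.full((2,), 1.0)]     # client A's (single) parameter array
w_b = [np.full((2,), 3.0)]     # client B's
global_w = fed_avg([w_a, w_b], client_sizes=[100, 300])
```

Client B holds three times the data, so the global parameters land at 0.25·1 + 0.75·3 = 2.5; only model updates, never raw measurements, cross the network, which is what cuts the training traffic.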

15 pages, 5731 KiB  
Article
Design of a Ka-Band Heterogeneous Integrated T/R Module of Phased Array Antenna
by Qinghua Zeng, Zhengtian Chen, Mengyun He, Song Wang, Xiao Liu and Haitao Xu
Electronics 2024, 13(1), 204; https://doi.org/10.3390/electronics13010204 - 2 Jan 2024
Cited by 2 | Viewed by 1093
Abstract
The central element of a phased array antenna that performs beam electrical scanning, as well as signal transmission and reception, is the transceiver (T/R) module. Higher standards have been set for the integration, volume, power consumption, stability, and environmental adaptability of T/R modules due to the increased operating frequency of phased array antennas, the variability of application platforms, and the diversified development of system functions. Device-based multichannel T/R modules are the key to realizing low-profile Ka-band phased array antenna microsystem architecture. The design and implementation of a low-profile, high-performance, and highly integrated Ka-band phased array antenna T/R module are examined in this paper. Additionally, a dependable Ka-band four-channel T/R module based on Si/GaAs/Low Temperature Co-fired Ceramic (LTCC), applying a multi-material heterogeneous integration architecture, is proposed and fabricated. The chip architecture, transceiver link, LTCC substrates, interconnect interface, and packaging are all taken into consideration when designing the T/R module. Compared to a standard phased array antenna, the module’s profile was reduced from 40 mm to 8 mm, and its overall dimensions are only 10.8 mm × 10 mm × 3 mm. It weighs 1 g, and with the same specifications, the single-channel volume was reduced by 95%. The T/R module has an output power of ≥26 dBm for single-channel transmission, an efficiency of ≥25%, and a noise factor of ≤4.4 dB. When compared to T/R modules based on System-on-Chip (SOC) devices, the RF performance is significantly improved, as seen in the increased single-channel output power and the reduced receiving noise factor. This work lays a foundation for the miniaturization and engineering application of T/R modules in highly reliable application scenarios. Full article

15 pages, 2049 KiB  
Article
A Multimodal Late Fusion Framework for Physiological Sensor and Audio-Signal-Based Stress Detection: An Experimental Study and Public Dataset
by Vasileios-Rafail Xefteris, Monica Dominguez, Jens Grivolla, Athina Tsanousa, Francesco Zaffanela, Martina Monego, Spyridon Symeonidis, Sotiris Diplaris, Leo Wanner, Stefanos Vrochidis and Ioannis Kompatsiaris
Electronics 2023, 12(23), 4871; https://doi.org/10.3390/electronics12234871 - 2 Dec 2023
Cited by 1 | Viewed by 1737
Abstract
Stress can be considered a mental/physiological reaction in conditions of high discomfort and challenging situations. The levels of stress can be reflected in both the physiological responses and speech signals of a person. Therefore, the study of the fusion of the two modalities is of great interest. To this end, public datasets are necessary so that the different proposed solutions can be compared. In this work, a publicly available multimodal dataset for stress detection is introduced, including physiological signals and speech cues data. The physiological signals include electrocardiograph (ECG), respiration (RSP), and inertial measurement unit (IMU) sensors equipped in a smart vest. A data collection protocol was introduced to receive physiological and audio data based on alternations between well-known stressors and relaxation moments. Five subjects participated in the data collection, where both their physiological and audio signals were recorded by utilizing the developed smart vest and audio recording application. In addition, an analysis of the data and a decision-level fusion scheme are proposed. The analysis of physiological signals includes extensive feature extraction along with various fusion and feature selection methods. The audio analysis comprises state-of-the-art feature extraction fed to a classifier to predict stress levels. Results from the analysis of audio and physiological signals are fused at the decision level for the final stress level detection, utilizing a machine learning algorithm. The whole framework was also tested in a real-life pilot scenario of disaster management, where users were acting as first responders while their stress was monitored in real time. Full article
(This article belongs to the Special Issue Future Trends of Artificial Intelligence (AI) and Big Data)
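Decision-level (late) fusion reduces to combining the per-class probabilities of the two unimodal classifiers. The paper learns this combination with a machine learning model; a fixed convex weighting is the simplest stand-in and is purely illustrative:

```python
import numpy as np

def late_fusion(p_physio, p_audio, w_physio=0.6):
    """Weighted average of the two classifiers' stress-class
    probabilities, followed by an argmax decision."""
    p = w_physio * np.asarray(p_physio) + (1.0 - w_physio) * np.asarray(p_audio)
    return int(np.argmax(p)), p

# physiological model leans 'stressed' (class 1), audio model leans 'relaxed'
label, p = late_fusion([0.2, 0.8], [0.7, 0.3], w_physio=0.6)
```

Because the modalities vote independently, a failure in one sensor stream degrades rather than destroys the final decision, one practical argument for fusing at the decision level.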

14 pages, 1680 KiB  
Article
AI to Train AI: Using ChatGPT to Improve the Accuracy of a Therapeutic Dialogue System
by Karolina Gabor-Siatkowska, Marcin Sowański, Rafał Rzatkiewicz, Izabela Stefaniak, Marek Kozłowski and Artur Janicki
Electronics 2023, 12(22), 4694; https://doi.org/10.3390/electronics12224694 - 18 Nov 2023
Cited by 1 | Viewed by 1898
Abstract
In this work, we present the use of one artificial intelligence (AI) application (ChatGPT) to train another AI-based application. As the latter, we present a dialogue system named Terabot, which was used in the therapy of psychiatric patients. Our study was motivated by the fact that, for such a domain-specific system, it was difficult to acquire large real-life data samples to increase the training database: this would require recruiting more patients, which is both time-consuming and costly. To address this gap, we employed a neural large language model, ChatGPT version 3.5, to generate data solely for training our dialogue system. During initial experiments, we identified the intents that were most often misrecognized. Next, we fed ChatGPT a series of prompts that triggered the language model to generate numerous additional training entries, e.g., alternatives to the phrases that had been collected during initial experiments with healthy users. This way, we enlarged the training dataset by 112%. In our case study, for testing, we used 2802 speech recordings originating from 32 psychiatric patients. As an evaluation metric, we used the accuracy of intent recognition. The speech samples were converted into text using automatic speech recognition (ASR). The analysis showed that the patients’ speech challenged the ASR module significantly, resulting in deteriorated speech recognition and, consequently, low accuracy of intent recognition. However, thanks to the augmentation of the training data with ChatGPT-generated data, the intent recognition accuracy increased by a relative 13%, reaching 86% in total. We also emulated the case of an error-free ASR and showed the impact of ASR misrecognitions on the intent recognition accuracy. Our study showcased the potential of using generative language models to develop other AI-based tools, such as dialogue systems. Full article
(This article belongs to the Special Issue Application of Machine Learning and Intelligent Systems)

33 pages, 1227 KiB  
Review
A Systematic Literature Review on Artificial Intelligence and Explainable Artificial Intelligence for Visual Quality Assurance in Manufacturing
by Rudolf Hoffmann and Christoph Reich
Electronics 2023, 12(22), 4572; https://doi.org/10.3390/electronics12224572 - 8 Nov 2023
Viewed by 3990
Abstract
Quality assurance (QA) plays a crucial role in manufacturing to ensure that products meet their specifications. However, manual QA processes are costly and time-consuming, making artificial intelligence (AI) an attractive solution for automation and expert support. In particular, convolutional neural networks (CNNs) have gained a lot of interest in visual inspection. Alongside AI methods, explainable artificial intelligence (XAI) systems, which achieve transparency and interpretability by providing insights into the decision-making process of the AI, are interesting methods for achieving quality inspections in manufacturing processes. In this study, we conducted a systematic literature review (SLR) to explore AI and XAI approaches for visual QA (VQA) in manufacturing. Our objective was to assess the current state of the art and identify research gaps in this context. Our findings revealed that AI-based systems predominantly focus on visual quality control (VQC) for defect detection. Research addressing other VQA practices, like process optimization, predictive maintenance, or root cause analysis, is rarer, and papers that utilize XAI methods are cited least often. In conclusion, this survey emphasizes the importance and potential of AI and XAI in VQA across various industries. By integrating XAI, organizations can enhance model transparency, interpretability, and trust in AI systems. Overall, leveraging AI and XAI improves VQA practices and decision-making in industry. Full article
(This article belongs to the Special Issue Convolutional Neural Networks and Vision Applications, 3rd Edition)

16 pages, 5443 KiB  
Article
Design of High-Gain and Low-Mutual-Coupling Multiple-Input–Multiple-Output Antennas Based on PRS for 28 GHz Applications
by Jinkyu Jung, Wahaj Abbas Awan, Domin Choi, Jaemin Lee, Niamat Hussain and Nam Kim
Electronics 2023, 12(20), 4286; https://doi.org/10.3390/electronics12204286 - 16 Oct 2023
Cited by 7 | Viewed by 1675
Abstract
In this paper, a high-gain and low-mutual-coupling four-port Multiple-Input Multiple-Output (MIMO) antenna based on a Partially Reflective Surface (PRS) for 28 GHz applications is proposed. The antenna radiator is a circular-shaped patch with a circular slot and a pair of vias to secure a wide bandwidth ranging from 24.29 GHz to 28.45 GHz (15.77%). The targeted band has been allocated in several countries and regions, including Korea, Europe, the United States, China, and Japan. The optimized antenna offers a peak gain of 8.77 dBi, with a gain of 6.78 dBi at the 24.29 GHz band edge. A novel PRS is designed and loaded onto the antenna for broadband and high-gain characteristics. With the PRS, the antenna offers a wide bandwidth from 23.67 GHz to 29 GHz (21%), and the gain is improved up to 11.4 dBi, an overall increase of about 3 dBi. A 2 × 2 MIMO system is designed using the single-element antenna, which offers a bandwidth of 23.5 to 29 GHz (20%) and a maximum gain of 11.4 dBi. The MIMO antenna also exhibits low mutual coupling of −35 dB along with a low Envelope Correlation Coefficient and Channel Capacity Loss, making it a suitable candidate for future compact-sized mmWave MIMO systems. Full article
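The Envelope Correlation Coefficient cited above can be computed directly from two-port S-parameters using the standard Blanch et al. formula. The sketch below uses illustrative values (including the reported −35 dB coupling), not the paper's measured data.

```python
# Envelope correlation coefficient (ECC) of a two-port antenna computed from
# S-parameters via the standard Blanch et al. formula (valid for lossless
# antennas). The numeric values below are assumptions for illustration.

def ecc(s11: complex, s12: complex, s21: complex, s22: complex) -> float:
    num = abs(s11.conjugate() * s12 + s21.conjugate() * s22) ** 2
    den = (1 - abs(s11) ** 2 - abs(s21) ** 2) * (1 - abs(s22) ** 2 - abs(s12) ** 2)
    return num / den

# -35 dB mutual coupling corresponds to |S21| = 10 ** (-35 / 20)
coupling = 10 ** (-35 / 20)
rho = ecc(0.1 + 0j, coupling + 0j, coupling + 0j, 0.1 + 0j)
print(f"ECC = {rho:.2e}")  # far below the common 0.5 design threshold
```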

14 pages, 840 KiB  
Article
Reconfigurable Intelligent Surface-Assisted Millimeter Wave Networks: Cell Association and Coverage Analysis
by Donglai Zhao, Gang Wang, Jinlong Wang and Zhiquan Zhou
Electronics 2023, 12(20), 4270; https://doi.org/10.3390/electronics12204270 - 16 Oct 2023
Cited by 2 | Viewed by 1112
Abstract
The reconfigurable intelligent surface (RIS) is emerging as a promising technology for coverage enhancement. This paper develops a tractable analytical framework based on stochastic geometry for the performance analysis of RIS-assisted millimeter wave networks. Based on this framework, a two-step cell association criterion is proposed, and analytical expressions for the user association probability and the coverage probability in general scenarios are derived. In addition, closed-form expressions for the two performance metrics in special cases are provided. The simulation results verify the accuracy of the derived analytical expressions, and reveal both the superiority of deploying RISs in millimeter wave networks and the effectiveness of the proposed cell association scheme in improving coverage. Furthermore, the effects of the RIS parameters and the base station (BS) density on coverage performance are investigated. Full article
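To give a feel for the stochastic-geometry setting, the toy Monte Carlo below estimates the coverage probability of a user served by its nearest base station, with base stations drawn from a Poisson point process. This is a drastic simplification of the paper's framework (no RISs, no mmWave blockage model), and every parameter value is illustrative.

```python
import math, random

# Toy Monte Carlo estimate of downlink coverage probability for a user at the
# origin served by its nearest base station (BS), with BS locations drawn from
# a Poisson point process in a disk. A drastic simplification of the paper's
# analytical framework; all parameter values are illustrative assumptions.

def poisson(lam: float, rng: random.Random) -> int:
    # Knuth's algorithm; adequate for the moderate rates used here
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def coverage_probability(bs_density, radius, path_loss_exp, snr_threshold,
                         noise, trials, seed=0):
    rng = random.Random(seed)
    lam = bs_density * math.pi * radius ** 2  # mean number of BSs in the disk
    covered = 0
    for _ in range(trials):
        n = poisson(lam, rng)
        if n == 0:
            continue  # no BS in range: user is not covered this trial
        # distance to the nearest of n points placed uniformly in the disk
        r = min(radius * math.sqrt(rng.random()) for _ in range(n))
        snr = r ** (-path_loss_exp) / noise  # unit transmit power
        covered += snr > snr_threshold
    return covered / trials

p = coverage_probability(bs_density=1e-4, radius=500, path_loss_exp=3.5,
                         snr_threshold=1.0, noise=1e-6, trials=2000)
print(f"estimated coverage probability: {p:.2f}")
```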

6 pages, 359 KiB  
Editorial
Wearable Electronic Systems Based on Smart Wireless Sensors for Multimodal Physiological Monitoring in Health Applications: Challenges, Opportunities, and Future Directions
by Cristiano De Marchis, Giovanni Crupi, Nicola Donato and Sergio Baldari
Electronics 2023, 12(20), 4284; https://doi.org/10.3390/electronics12204284 - 16 Oct 2023
Viewed by 1142
Abstract
Driven by the fast-expanding market, wearable technologies have rapidly evolved [...] Full article
(This article belongs to the Section Microwave and Wireless Communications)

31 pages, 5489 KiB  
Article
Explicit Representation of Mechanical Functions for Maintenance Decision Support
by Mengchu Song, Ilmar F. Santos, Xinxin Zhang, Jing Wu and Morten Lind
Electronics 2023, 12(20), 4267; https://doi.org/10.3390/electronics12204267 - 15 Oct 2023
Cited by 1 | Viewed by 1228
Abstract
Artificial intelligence (AI) has been increasingly applied to condition-based maintenance (CBM), a knowledge-based method taking advantage of human expertise and other system knowledge that can serve as an alternative in cases in which machine learning is inapplicable due to a lack of training data. Functional information is seen as the most fundamental and important knowledge in maintenance decision making. This paper first proposes a mechanical functional modeling approach based on a functional modeling and reasoning methodology called multilevel flow modeling (MFM). The approach actually bridges the modeling gap between the mechanical level and the process level, which potentially extends the existing capability of MFM in rule-based diagnostics and prognostics from operation support to maintenance support. Based on this extension, a framework of optimized CBM is proposed, which can be used to diagnose potential mechanical failures from condition monitoring data and predict their future impacts in a qualitative way. More importantly, the framework uses MFM-based reliability-centered maintenance (RCM) to determine the importance of a detected potential failure, which can ensure the cost-effectiveness of CBM by adapting the maintenance requirements to specific operational contexts. This ability cannot be offered by existing CBM methods. An application to a mechanical test apparatus and hypothetical coupling with a process plant are used to demonstrate the proposed framework. Full article

26 pages, 2948 KiB  
Article
Real-Time AI-Driven Fall Detection Method for Occupational Health and Safety
by Anastasiya Danilenka, Piotr Sowiński, Kajetan Rachwał, Karolina Bogacka, Anna Dąbrowska, Monika Kobus, Krzysztof Baszczyński, Małgorzata Okrasa, Witold Olczak, Piotr Dymarski, Ignacio Lacalle, Maria Ganzha and Marcin Paprzycki
Electronics 2023, 12(20), 4257; https://doi.org/10.3390/electronics12204257 - 14 Oct 2023
Cited by 3 | Viewed by 2050
Abstract
Fall accidents in industrial and construction environments require an immediate reaction to provide first aid. Shortening the time between the fall and the relevant personnel being notified can significantly improve the safety and health of workers. Therefore, in this work, an IoT system for real-time fall detection is proposed, using the ASSIST-IoT reference architecture. Empowered with a machine learning model, the system can detect fall accidents and swiftly notify the occupational health and safety manager. To train the model, a novel multimodal fall detection dataset was collected from ten human participants and an anthropomorphic dummy, covering multiple types of falls, including falls from a height. The dataset includes absolute location and acceleration measurements from several IoT devices. Furthermore, a lightweight long short-term memory model is proposed for fall detection, capable of operating in an IoT environment with limited network bandwidth and hardware resources. The accuracy and F1-score of the model on the collected dataset were shown to exceed 0.95 and 0.9, respectively. The collected multimodal dataset was published under an open license to facilitate future research on fall detection methods in occupational health and safety. Full article
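For intuition about what an acceleration-based detector looks for, here is a deliberately simple threshold heuristic: a free-fall dip in total acceleration followed by an impact spike. It is a lightweight stand-in for the paper's LSTM model, and the thresholds are illustrative assumptions.

```python
# Simplified accelerometer-based fall heuristic: a free-fall dip in total
# acceleration followed shortly by an impact spike. A toy stand-in for the
# paper's LSTM model; thresholds and data are illustrative assumptions.

def detect_fall(acc_magnitudes, free_fall_g=0.4, impact_g=2.5, window=10):
    """acc_magnitudes: acceleration magnitude per sample, in units of g."""
    for i, a in enumerate(acc_magnitudes):
        if a < free_fall_g:  # candidate free-fall phase
            if any(b > impact_g for b in acc_magnitudes[i:i + window]):
                return True  # impact follows within the window
    return False

walking = [1.0, 1.1, 0.9, 1.2, 1.0, 0.95, 1.05]
fall = [1.0, 0.9, 0.3, 0.2, 0.25, 3.1, 1.4, 1.0]
print(detect_fall(walking), detect_fall(fall))  # False True
```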
(This article belongs to the Special Issue Artificial Intelligence Empowered Internet of Things)

12 pages, 7248 KiB  
Article
Optimal Camera Placement to Generate 3D Reconstruction of a Mixed-Reality Human in Real Environments
by Juhwan Kim and Dongsik Jo
Electronics 2023, 12(20), 4244; https://doi.org/10.3390/electronics12204244 - 13 Oct 2023
Viewed by 1608
Abstract
Virtual reality and augmented reality are increasingly used for immersive engagement by utilizing information from real environments. In particular, three-dimensional model data, the basis for creating virtual places, can be developed manually with commercial modeling toolkits, but advances in sensing and computer vision technology also allow virtual environments to be created automatically. Specifically, a 3D reconstruction approach can generate a single 3D model from image information captured in various scenes by several cameras (multi-camera setups). The goal is to generate a 3D model with excellent precision. However, rules for choosing the optimal number of cameras and settings for capturing information from real environments (e.g., actual people) when the cameras are placed in unconventional positions are lacking. In this study, we propose an optimal camera placement strategy for acquiring high-quality 3D human data with multiple irregularly placed cameras in real environments. Our results show that installation costs can be lowered by arranging a minimum number of cameras in an arbitrary space, and that automated virtual human creation with high accuracy can be achieved using optimal irregular camera locations. Full article
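One classic way to frame "cover all targets with the fewest cameras" is greedy set cover. The sketch below is a toy stand-in for the paper's optimization, with an invented visibility map rather than real camera geometry.

```python
# Greedy sketch of the camera-placement problem: choose the fewest candidate
# positions whose combined fields of view cover all target points. A toy
# set-cover stand-in for the paper's optimization; the visibility sets below
# are invented for illustration.

def greedy_placement(candidates: dict[str, set[int]], targets: set[int]) -> list[str]:
    """candidates maps a camera position to the set of target points it sees."""
    chosen, uncovered = [], set(targets)
    while uncovered:
        best = max(candidates, key=lambda c: len(candidates[c] & uncovered))
        if not candidates[best] & uncovered:
            raise ValueError("targets cannot be fully covered")
        chosen.append(best)
        uncovered -= candidates[best]
    return chosen

cams = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5, 6}, "D": {2, 5}}
plan = greedy_placement(cams, targets={1, 2, 3, 4, 5, 6})
print(plan)  # ['A', 'C'] covers every target with two cameras
```

Greedy set cover is not always optimal, but it carries a well-known logarithmic approximation guarantee, which is why it is a common baseline for placement problems.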
(This article belongs to the Special Issue Perception and Interaction in Mixed, Augmented, and Virtual Reality)

16 pages, 4373 KiB  
Article
Computer Vision Algorithms for 3D Object Recognition and Orientation: A Bibliometric Study
by Youssef Yahia, Júlio Castro Lopes and Rui Pedro Lopes
Electronics 2023, 12(20), 4218; https://doi.org/10.3390/electronics12204218 - 12 Oct 2023
Cited by 1 | Viewed by 1475
Abstract
This paper presents a bibliometric study covering the topic of 3D object detection from 2022 until the present day. It employs various analysis approaches that shed light on the leading authors, affiliations, and countries within this research domain, alongside its main themes of interest. The findings reveal that China leads this domain: it produces most of the scientific literature and hosts the most productive universities and authors in terms of the number of publications. China has also initiated a significant number of collaborations with nations around the world. The most fundamental theme in this field is deep learning, along with autonomous driving, point clouds, robotics, and LiDAR. The work also includes an in-depth review of some of the latest frameworks that have taken on challenges in this area, such as improving object detection from point clouds and training end-to-end fusion methods that use both camera and LiDAR sensors. Full article
(This article belongs to the Special Issue Applications of Deep Learning Techniques)

15 pages, 5638 KiB  
Article
Underwater Biomimetic Covert Acoustic Communications Mimicking Multiple Dolphin Whistles
by Yongcheol Kim, Hojun Lee, Seunghwan Seol, Bonggyu Park and Jaehak Chung
Electronics 2023, 12(19), 3999; https://doi.org/10.3390/electronics12193999 - 22 Sep 2023
Cited by 1 | Viewed by 796
Abstract
This paper presents an underwater biomimetic covert acoustic communication system that achieves high covertness and a high data rate by mimicking dolphin group whistles. The proposed method combines time–frequency shift keying modulation with continuously varying carrier frequency modulation, which mitigates the interference between overlapping whistles while maintaining a high data rate. The data rate and bit error rate (BER) performance of the proposed method were compared with conventional underwater covert communication through an additive white Gaussian noise channel, a modeled underwater channel, and practical ocean experiments. For the covertness test, the similarity of the proposed multiple whistles to real dolphin group whistles was assessed using a mean opinion score test. As a result, the proposed method demonstrated a higher data rate, better BER performance, and strong covertness, with its signals closely resembling real dolphin group whistles. Full article
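The continuously varying carrier idea can be illustrated by synthesizing a whistle-like tone whose instantaneous frequency sweeps smoothly between two values. This sketch shows only that basic mechanism; the frequencies, sweep shape, and sample rate are illustrative, not the authors' modulation design.

```python
import math

# Toy synthesis of a dolphin-like whistle: a tone whose instantaneous
# frequency sweeps linearly between two values, illustrating the idea of a
# continuously varying carrier. All parameters are illustrative assumptions.

def whistle(f_start: float, f_end: float, duration: float, rate: int = 48000):
    n = int(duration * rate)
    samples, phase = [], 0.0
    for i in range(n):
        f = f_start + (f_end - f_start) * i / n  # linear frequency sweep
        phase += 2 * math.pi * f / rate          # accumulate phase continuously
        samples.append(math.sin(phase))
    return samples

sig = whistle(f_start=8000, f_end=14000, duration=0.05)
print(len(sig))  # 2400 samples for 50 ms at 48 kHz
```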
(This article belongs to the Special Issue New Advances in Underwater Communication Systems)

18 pages, 12864 KiB  
Article
A CMA-Based Electronically Reconfigurable Dual-Mode and Dual-Band Antenna
by Nicholas E. Russo, Constantinos L. Zekios and Stavros V. Georgakopoulos
Electronics 2023, 12(18), 3915; https://doi.org/10.3390/electronics12183915 - 17 Sep 2023
Viewed by 852
Abstract
In this work, an electronically reconfigurable dual-band dual-mode microstrip ring antenna with high isolation is proposed. Using characteristic mode analysis (CMA), the physical characteristics of the ring antenna are revealed, and two modes are appropriately chosen for operation in two sub-6 GHz “legacy” bands. Due to the inherent orthogonality of the characteristic modes, measured isolation larger than 37 dB was achieved in both bands without requiring complicated decoupling approaches. An integrated electronically reconfigurable matching network (comprising PIN diodes and varactors) was designed to switch between the two modes of operation. The simulated and measured results were in excellent agreement, showing a peak gain of 4.7 dB for both modes and radiation efficiency values of 44.3% and 64%, respectively. Using CMA to gain physical insights into the radiative orthogonal modes of under-researched and non-conventional antennas (e.g., antennas of arbitrary shapes) opens the door to developing highly compact radiators, which enable next-generation communication systems. Full article
(This article belongs to the Special Issue Recent Advances in Antenna Arrays and Millimeter-Wave Components)

15 pages, 4402 KiB  
Article
DSW-YOLOv8n: A New Underwater Target Detection Algorithm Based on Improved YOLOv8n
by Qiang Liu, Wei Huang, Xiaoqiu Duan, Jianghao Wei, Tao Hu, Jie Yu and Jiahuan Huang
Electronics 2023, 12(18), 3892; https://doi.org/10.3390/electronics12183892 - 15 Sep 2023
Cited by 6 | Viewed by 2086
Abstract
Underwater target detection is widely used in applications such as underwater search and rescue, underwater environment monitoring, and marine resource surveying. However, the complex underwater environment, including factors such as light changes and background noise, poses a significant challenge to target detection. We propose an improved underwater target detection algorithm based on YOLOv8n to overcome these problems. Our algorithm focuses on three aspects. First, we replace the original C2f module with Deformable ConvNets v2 to improve the network's ability to adapt to the target region during convolution and to extract the target region's features more accurately. Second, we introduce SimAM, a parameter-free attention mechanism that can infer and assign three-dimensional attention weights without adding network parameters. Last, we optimize the loss function by replacing the CIoU loss with the Wise-IoU loss. We name the new algorithm DSW-YOLOv8n, an acronym for Deformable ConvNets v2, SimAM, and Wise-IoU applied to the improved YOLOv8n. For our experiments, we created our own underwater target detection dataset and also used the Pascal VOC dataset to evaluate our approach. On underwater target detection, the original YOLOv8n achieved an mAP@0.5 of 88.6% and an mAP@0.5:0.95 of 51.8%, while DSW-YOLOv8n reached 91.8% and 55.9%, respectively. On the Pascal VOC dataset, the original YOLOv8n achieved 62.2% mAP@0.5 and 45.9% mAP@0.5:0.95, while DSW-YOLOv8n achieved 65.7% and 48.3%, respectively. The number of model parameters is reduced by about 6%. These experimental results demonstrate the effectiveness of our method. Full article
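Both the CIoU and Wise-IoU losses named above are built on plain intersection-over-union between predicted and ground-truth boxes. The sketch below shows only that base metric, not either loss variant.

```python
# Plain intersection-over-union (IoU) between two axis-aligned boxes given as
# (x1, y1, x2, y2). IoU is the quantity that bounding-box regression losses
# such as CIoU and Wise-IoU build on; this shows only the base metric.

def iou(a, b) -> float:
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175, i.e. about 0.143
```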
(This article belongs to the Special Issue Advances and Applications of Computer Vision in Electronics)

21 pages, 7872 KiB  
Article
YOLO-Drone: An Optimized YOLOv8 Network for Tiny UAV Object Detection
by Xianxu Zhai, Zhihua Huang, Tao Li, Hanzheng Liu and Siyuan Wang
Electronics 2023, 12(17), 3664; https://doi.org/10.3390/electronics12173664 - 30 Aug 2023
Cited by 19 | Viewed by 11775
Abstract
With the widespread use of UAVs in commercial and industrial applications, UAV detection is receiving increasing attention in areas such as public safety. As a result, object detection techniques for UAVs are also developing rapidly. However, the small size of drones, complex airspace backgrounds, and changing light conditions still pose significant challenges for research in this area. To address these problems, this paper proposes a tiny UAV detection method based on an optimized YOLOv8. First, in the detection head component, a high-resolution detection head is added to improve the model's ability to detect small targets, while the large-target detection head and redundant network layers are removed to effectively reduce the number of network parameters and improve detection speed. Second, in the feature extraction stage, SPD-Conv is used instead of Conv to extract multi-scale features, reducing the loss of fine-grained information and enhancing the model's feature extraction capability for small targets. Finally, the GAM attention mechanism is introduced in the neck to enhance the fusion of target features and improve the model's overall performance in detecting UAVs. Relative to the baseline model, our method improves P (precision), R (recall), and mAP (mean average precision) by 11.9%, 15.2%, and 9%, respectively, while reducing the number of parameters and the model size by 59.9% and 57.9%, respectively. In addition, our method demonstrates clear advantages in comparison experiments and experiments on a self-built dataset, and is well suited for engineering deployment and practical UAV object detection systems. Full article
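SPD-Conv's key ingredient is a space-to-depth rearrangement: each 2×2 spatial block of the feature map moves into the channel dimension, so downsampling discards no fine-grained values. Here is a pure-Python sketch of that transform on a nested-list "tensor" (real implementations operate on framework tensors).

```python
# Space-to-depth rearrangement as used in SPD-Conv: each block x block
# spatial neighborhood of the feature map is concatenated into the channel
# dimension, so spatial downsampling loses no fine-grained values.

def space_to_depth(fmap, block=2):
    """fmap is H x W x C as nested lists; returns (H/block) x (W/block) x (C*block^2)."""
    h, w = len(fmap), len(fmap[0])
    out = []
    for i in range(0, h, block):
        row = []
        for j in range(0, w, block):
            cell = []
            for di in range(block):
                for dj in range(block):
                    cell.extend(fmap[i + di][j + dj])  # gather block channels
            row.append(cell)
        out.append(row)
    return out

fmap = [[[x * 10 + y] for y in range(4)] for x in range(4)]  # 4x4, 1 channel
out = space_to_depth(fmap)
print(len(out), len(out[0]), len(out[0][0]))  # 2 2 4
```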
(This article belongs to the Special Issue Advances in Computer Vision and Deep Learning and Its Applications)

19 pages, 4263 KiB  
Article
Integration of Wearables and Wireless Technologies to Improve the Interaction between Disabled Vulnerable Road Users and Self-Driving Cars
by Antonio Guerrero-Ibañez, Ismael Amezcua-Valdovinos and Juan Contreras-Castillo
Electronics 2023, 12(17), 3587; https://doi.org/10.3390/electronics12173587 - 25 Aug 2023
Cited by 2 | Viewed by 1751
Abstract
The auto industry is accelerating, and self-driving cars are becoming a reality. However, the acceptance of such cars will depend on their social and environmental integration into a road traffic ecosystem comprising vehicles, motorcycles, bicycles, and pedestrians. One of the most vulnerable groups within the road ecosystem is pedestrians. Assistive technology focuses on ensuring functional independence for people with disabilities. However, little effort has been devoted to exploring possible interaction mechanisms between pedestrians with disabilities and self-driving cars. This paper analyzes how self-driving cars and disabled pedestrians should interact in a traffic ecosystem supported by wearable devices for pedestrians to feel safer and more comfortable. We define the concept of an Assistive Self-driving Car (ASC). We describe a set of procedures to identify people with disabilities using an IEEE 802.11p-based device and a group of messages to express the intentions of disabled pedestrians to self-driving cars. This interaction provides disabled pedestrians with increased safety and confidence in performing tasks such as crossing the street. Finally, we discuss strategies for alerting disabled pedestrians to potential hazards within the road ecosystem. Full article
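To make the idea of intent messages concrete, here is a hypothetical message structure a wearable might broadcast to nearby self-driving cars. The paper defines such a message set over IEEE 802.11p, but the field names, intents, and values below are purely illustrative, not the paper's specification.

```python
from dataclasses import dataclass
from enum import Enum, auto

# Hypothetical message a disabled pedestrian's wearable could broadcast to
# nearby self-driving cars. The paper defines a message set over IEEE
# 802.11p; these fields and names are illustrative assumptions only.

class Intent(Enum):
    CROSS_STREET = auto()
    WAIT = auto()
    REQUEST_EXTRA_TIME = auto()

@dataclass
class PedestrianMessage:
    pedestrian_id: str
    disability_profile: str   # e.g. "visual", "motor"
    intent: Intent
    latitude: float
    longitude: float

msg = PedestrianMessage("ped-042", "visual", Intent.CROSS_STREET, 19.24, -103.72)
print(msg.intent.name)  # CROSS_STREET
```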

16 pages, 5254 KiB  
Article
A Deep Learning Framework for Adaptive Beamforming in Massive MIMO Millimeter Wave 5G Multicellular Networks
by Spyros Lavdas, Panagiotis K. Gkonis, Efthalia Tsaknaki, Lambros Sarakis, Panagiotis Trakadas and Konstantinos Papadopoulos
Electronics 2023, 12(17), 3555; https://doi.org/10.3390/electronics12173555 - 23 Aug 2023
Cited by 2 | Viewed by 1396
Abstract
The goal of this paper is the performance evaluation of a deep learning approach deployed in fifth-generation (5G) millimeter wave (mmWave) multicellular networks. To this end, the optimum beamforming configuration is defined by two neural networks (NNs) trained according to mean square error (MSE) minimization. The first network takes as input the requested spectral efficiency (SE) per active sector, while the second takes the corresponding energy efficiency (EE). Hence, channel and power variations can be taken into consideration during adaptive beamforming. The performance of the proposed approach is evaluated with the help of a developed system-level simulator via extensive Monte Carlo simulations. According to the presented results, machine learning (ML)-based adaptive beamforming can significantly improve EE compared to the standard non-ML framework. Although this improvement comes at the cost of increased blocking probability (BP) and more radiating elements (REs) for high-data-rate services, the corresponding increases are significantly smaller than the EE improvement. In particular, considering 21.6 Mbps per active user and ML adaptive beamforming, the EE can reach up to 5.3 Mbps/W, a significant improvement over the non-ML case (0.9 Mbps/W). In this context, BP does not exceed 2.6%, slightly worse than the 1.7% of the standard non-ML case. Moreover, approximately 20% more REs are required with respect to the non-ML framework. Full article
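MSE minimization, the training criterion named above, can be shown in miniature with gradient descent on a linear model. This is a toy stand-in: the paper's networks map requested spectral or energy efficiency to beamforming configurations, whereas the data below is synthetic.

```python
import random

# Minimal gradient-descent fit of a linear model under mean-square-error,
# the same training criterion the paper's two neural networks use. A toy
# stand-in with synthetic data, not the paper's networks or inputs.

random.seed(1)
xs = [i / 10 for i in range(20)]
ys = [2.0 * x + 0.5 + random.gauss(0, 0.01) for x in xs]  # target: w=2, b=0.5

w, b, lr = 0.0, 0.0, 0.05
for _ in range(2000):
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w, b = w - lr * grad_w, b - lr * grad_b

mse = sum((w * x + b - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
print(f"w = {w:.2f}, b = {b:.2f}, MSE = {mse:.5f}")
```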
(This article belongs to the Special Issue Recent Advances in Antenna Arrays and Millimeter-Wave Components)

16 pages, 13408 KiB  
Article
A 220 GHz to 325 GHz Grounded Coplanar Waveguide Based Periodic Leaky-Wave Beam-Steering Antenna in Indium Phosphide Process
by Akanksha Bhutani, Marius Kretschmann, Joel Dittmer, Peng Lu, Andreas Stöhr and Thomas Zwick
Electronics 2023, 12(16), 3482; https://doi.org/10.3390/electronics12163482 - 17 Aug 2023
Cited by 2 | Viewed by 1826
Abstract
This paper presents a novel periodic grounded coplanar waveguide (GCPW) leaky-wave antenna implemented in an Indium Phosphide (InP) process. The antenna is designed to operate in the 220 GHz–325 GHz frequency range, with the goal of integrating it with an InP uni-traveling-carrier photodiode to realize a wireless transmitter module. Future wireless communication systems must deliver a high data rate to multiple users in different locations. Therefore, wireless transmitters need to have a broadband nature, high gain, and beam-steering capability. Leaky-wave antennas offer a simple and cost-effective way to achieve beam-steering by sweeping frequency in the THz range. In this paper, the first periodic GCPW leaky-wave antenna in the 220 GHz–325 GHz frequency range is demonstrated. The antenna design is based on a novel GCPW leaky-wave unit cell (UC) that incorporates mirrored L-slots in the lateral ground planes. These mirrored L-slots effectively mitigate the open stopband phenomenon of a periodic leaky-wave antenna. The leakage rate, phase constant, and Bloch impedance of the novel GCPW leaky-wave UC are analyzed using Floquet's theory. After optimizing the UC, a periodic GCPW leaky-wave antenna is constructed by cascading 16 UCs. Electromagnetic simulation results of the leaky-wave antenna are compared with an ideal model derived from a single UC. The two design approaches show excellent agreement in terms of their reflection coefficient and beam-steering range. Therefore, the ideal model presented in this paper demonstrates, for the first time, a rapid method for developing periodic leaky-wave antennas. To validate the simulation results, probe-based antenna measurements are conducted, showing close agreement in terms of the reflection coefficient, peak antenna gain, beam-steering angle, and far-field radiation patterns. The periodic GCPW leaky-wave antenna presented in this paper exhibits a high gain of up to 13.5 dBi and a wide beam-steering range from 60° to 35° over the 220 GHz–325 GHz frequency range. Full article
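The frequency-steered beam of a leaky-wave antenna follows the textbook relation sin(θ) = β/k₀, where β is the phase constant and k₀ the free-space wavenumber, so sweeping frequency sweeps the main beam. The dispersion model below is an invented linear β(f), purely to illustrate the relationship, not the measured dispersion of this antenna.

```python
import math

# Main-beam direction of a leaky-wave antenna: sin(theta) = beta / k0, so
# sweeping frequency steers the beam. The beta(f) model below is an assumed
# linear dispersion for illustration, not the antenna's measured dispersion.

C = 2.998e8  # speed of light, m/s

def beam_angle_deg(freq_hz: float, beta: float) -> float:
    k0 = 2 * math.pi * freq_hz / C  # free-space wavenumber
    return math.degrees(math.asin(beta / k0))

for f_ghz in (220, 270, 325):
    f = f_ghz * 1e9
    ratio = 0.2 + 0.6 * (f_ghz - 220) / 105     # assumed beta/k0 vs frequency
    beta = 2 * math.pi * f / C * ratio
    print(f"{f_ghz} GHz -> beam at {beam_angle_deg(f, beta):.1f} deg")
```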
(This article belongs to the Special Issue Advanced Antenna Technologies for B5G and 6G Applications)

15 pages, 2160 KiB  
Article
Safe and Trustful AI for Closed-Loop Control Systems
by Julius Schöning and Hans-Jürgen Pfisterer
Electronics 2023, 12(16), 3489; https://doi.org/10.3390/electronics12163489 - 17 Aug 2023
Cited by 3 | Viewed by 1977
Abstract
In modern times, closed-loop control systems (CLCSs) play a prominent role in a wide range of applications, from production machinery via automated vehicles to robots. CLCSs actively manipulate the actual values of a process to match predetermined setpoints, typically in real time and with remarkable precision. However, the development, modeling, tuning, and optimization of CLCSs barely exploit the potential of artificial intelligence (AI). This paper explores novel opportunities and research directions in CLCS engineering, presenting potential designs and methodologies incorporating AI. Taken together, these opportunities and directions make it evident that employing AI in the development and implementation of CLCSs is feasible, and that integrating AI into CLCS development, or directly within CLCSs, can significantly improve stakeholder confidence. This raises the question: how can AI in CLCSs be trusted so that its promising capabilities can be used safely? AI in CLCSs is difficult to trust because its extensive set of parameters defies complete testing. Consequently, developers working on AI-based CLCSs must be able to accurately rate the impact of the trainable parameters on the system. Following this path, this paper highlights two key aspects as essential research directions towards safe AI-based CLCSs: (I) the identification and elimination of unproductive layers in artificial neural networks (ANNs), reducing the number of trainable parameters without influencing the overall outcome, and (II) the utilization of the solution space of an ANN to define the safety-critical scenarios of an AI-based CLCS. Full article
(This article belongs to the Special Issue Advances in Artificial Intelligence Engineering)
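Research direction (I) above, identifying unproductive ANN layers, can be illustrated with a small self-contained sketch: a toy numpy MLP in which one hidden layer is deliberately near-identity, and each layer is rated by how much bypassing it perturbs the network output. The network, weights, and scoring are all illustrative and not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Toy 4-layer MLP standing in for a trained network.
# Layer 1 is near-identity (I plus tiny noise), i.e. "unproductive".
layers = [
    rng.normal(size=(8, 8)),
    np.eye(8) + 1e-6 * rng.normal(size=(8, 8)),  # contributes almost nothing
    rng.normal(size=(8, 8)),
    rng.normal(size=(8, 4)),
]

def forward(x, skip=None):
    """Run the MLP, optionally bypassing layer `skip` with the identity."""
    for i, w in enumerate(layers):
        if i == skip:
            continue
        x = relu(x @ w)
    return x

x = rng.normal(size=(32, 8))   # a batch of probe inputs
baseline = forward(x)

# Rate each square layer by how much skipping it changes the output;
# a near-zero change marks a candidate for elimination.
for i in range(len(layers) - 1):   # last layer changes shape, keep it
    delta = np.linalg.norm(forward(x, skip=i) - baseline)
    print(f"layer {i}: output change if removed = {delta:.6f}")
```

In this toy run, layer 1's score is orders of magnitude below the others, so it could be removed without influencing the overall outcome, which is exactly the pruning criterion the abstract describes.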
26 pages, 1389 KiB  
Article
MAGNETO and DeepInsight: Extended Image Translation with Semantic Relationships for Classifying Attack Data with Machine Learning Models
by Aeryn Dunmore, Adam Dunning, Julian Jang-Jaccard, Fariza Sabrina and Jin Kwak
Electronics 2023, 12(16), 3463; https://doi.org/10.3390/electronics12163463 - 15 Aug 2023
Cited by 3 | Viewed by 1334
Abstract
The translation of traffic flow data into images for classification in machine learning tasks has been extensively explored in recent years. However, the method of translation has a significant impact on the success of such attempts. In 2019, a method called DeepInsight was developed to translate genetic information into images. It was then adopted in 2021 for translating network traffic into images while retaining semantic data about the relationships between features, in a model called MAGNETO. In this paper, we explore and extend this research, applying the MAGNETO algorithm to three new intrusion detection datasets (CICDDoS2019, 5G-NIDD, and BOT-IoT) and extending the method to multiclass classification, first with a One-versus-Rest model and then with a full multiclass classification task, using multiple new classifiers for comparison against the CNNs implemented by the original MAGNETO model. We have also undertaken comparative experiments on the original MAGNETO datasets, CICIDS17, KDD99, and UNSW-NB15, as well as a comparison against other state-of-the-art models on the NSL-KDD dataset. The results show that the MAGNETO algorithm and the DeepInsight translation method, without the use of data augmentation, offer a significant boost to accuracy when classifying network traffic data. Our research also shows the effectiveness of Decision Tree and Random Forest classifiers on this type of data. Further research into real-time execution is needed to explore extending this method of translation to real-world scenarios. Full article
(This article belongs to the Special Issue Application Research Using AI, IoT, HCI, and Big Data Technologies)
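The core DeepInsight idea behind MAGNETO, assigning each feature a 2-D pixel location so that related features land near each other and then painting each sample's values onto that canvas, can be sketched as follows. The original method uses t-SNE for the feature projection; this dependency-free sketch substitutes PCA, and the data, canvas size, and function names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for flow records: 200 samples x 16 features.
X = rng.normal(size=(200, 16))

def feature_coords(X, size=8):
    """Place each FEATURE at a 2-D pixel location based on how it
    co-varies with the other features (PCA on the transposed data)."""
    F = X.T - X.T.mean(axis=1, keepdims=True)    # features as rows
    u, s, vt = np.linalg.svd(F, full_matrices=False)
    coords = F @ vt[:2].T                        # (n_features, 2)
    # normalise into integer pixel indices of a size x size canvas
    coords -= coords.min(axis=0)
    coords /= coords.max(axis=0) + 1e-12
    return np.clip((coords * (size - 1)).round().astype(int), 0, size - 1)

def to_image(sample, coords, size=8):
    """Paint one sample's feature values onto the canvas; features
    that land on the same pixel are averaged."""
    img = np.zeros((size, size))
    cnt = np.zeros((size, size))
    for value, (r, c) in zip(sample, coords):
        img[r, c] += value
        cnt[r, c] += 1
    return img / np.maximum(cnt, 1)

coords = feature_coords(X)        # computed once for the whole dataset
img = to_image(X[0], coords)      # one 8x8 "traffic image" per sample
print(img.shape)  # (8, 8)
```

The resulting images can then be fed to an ordinary image classifier such as a CNN, which is the step the MAGNETO pipeline performs on the real datasets.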
23 pages, 4527 KiB  
Article
Self-Regulated Learning and Active Feedback of MOOC Learners Supported by the Intervention Strategy of a Learning Analytics System
by Ruth Cobos
Electronics 2023, 12(15), 3368; https://doi.org/10.3390/electronics12153368 - 7 Aug 2023
Cited by 1 | Viewed by 1812
Abstract
MOOCs offer great learning opportunities, but they also present several challenges that hinder learners from successfully completing them. To address these challenges, edX-LIMS (System for Learning Intervention and its Monitoring for edX MOOCs) was developed. It is a learning analytics system that supports an intervention strategy (based on learners’ interactions with the MOOC) to provide feedback to learners through web-based Learner Dashboards. Additionally, edX-LIMS provides a web-based Instructor Dashboard for instructors to monitor their learners. In this article, an enhanced version of the aforementioned system, called edX-LIMS+, is presented. This upgrade introduces new services that enhance both the learners’ and instructors’ dashboards, with a particular focus on self-regulated learning. Moreover, the system detects learners’ problems in order to guide them and to assist instructors in monitoring learners and providing the necessary support. The results obtained from the use of this new version (through learners’ interactions and opinions about their dashboards) demonstrate that the feedback provided has been significantly improved, offering more valuable information to learners and enhancing their perception of both the dashboard and the intervention strategy supported by the system. Additionally, most learners agreed with the problems detected for them, thereby enabling instructors to enhance interventions and support learners’ learning processes. Full article
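A minimal sketch of the kind of interaction-based problem detection such an intervention strategy relies on, assuming hypothetical weekly interaction counts and illustrative thresholds (not edX-LIMS’s actual rules):

```python
# Hypothetical weekly interaction counts per learner.
activity = {
    "learner_a": [12, 9, 7, 5],   # steady engagement
    "learner_b": [10, 2, 0, 0],   # fading engagement
    "learner_c": [0, 1, 0, 0],    # barely started, then stopped
}

def detect_problems(weekly, inactive_weeks=2, drop_ratio=0.5):
    """Flag a learner when recent weeks show no activity, or when
    activity has collapsed relative to their own earlier pace.
    Thresholds are illustrative, not the system's."""
    problems = []
    if sum(weekly[-inactive_weeks:]) == 0:
        problems.append("inactive")
    half = max(1, len(weekly) // 2)
    early, late = sum(weekly[:half]), sum(weekly[half:])
    if early > 0 and late < drop_ratio * early:
        problems.append("declining")
    return problems

for learner, weeks in activity.items():
    print(learner, detect_problems(weeks))
```

Detected problems like these would drive the feedback shown on the Learner Dashboard and the alerts surfaced to instructors.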
18 pages, 513 KiB  
Article
Cascading and Ensemble Techniques in Deep Learning
by I. de Zarzà, J. de Curtò, Enrique Hernández-Orallo and Carlos T. Calafate
Electronics 2023, 12(15), 3354; https://doi.org/10.3390/electronics12153354 - 5 Aug 2023
Cited by 4 | Viewed by 3109
Abstract
In this study, we explore the integration of cascading and ensemble techniques in Deep Learning (DL) to improve prediction accuracy on diabetes data. The primary approach involves creating multiple Neural Networks (NNs), each predicting the outcome independently, and then feeding these initial predictions into another set of NNs. Our exploration starts from a preliminary study and extends to various ensemble techniques, including bagging, stacking, and finally cascading. The cascading ensemble involves training a second layer of models on the predictions of the first. This cascading structure, combined with ensemble voting for the final prediction, aims to exploit the strengths of multiple models while mitigating their individual weaknesses. Our results demonstrate a significant improvement in prediction accuracy, providing a compelling case for the potential utility of these techniques in healthcare applications, specifically the prediction of diabetes, where we achieve a model accuracy of 91.5% on the test set of a particularly challenging dataset and compare thoroughly against many other methodologies. Full article
(This article belongs to the Special Issue Artificial Intelligence Technologies and Applications)
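The cascading structure described above, base models whose predictions feed a second-stage model combined with an ensemble vote, can be sketched on synthetic data. Logistic-regression units stand in for the paper’s NNs to keep the sketch dependency-free; the data, model count, and thresholds are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for tabular diabetes data: 2 informative features.
n = 400
X = rng.normal(size=(n, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.normal(size=n) > 0).astype(float)
X_tr, y_tr, X_te, y_te = X[:300], y[:300], X[300:], y[300:]

def train_logreg(X, y, steps=500, lr=0.1):
    """Minimal logistic-regression 'base model' (stands in for an NN)."""
    w = np.zeros(X.shape[1]); b = 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        g = p - y
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

def predict_proba(model, X):
    w, b = model
    return 1 / (1 + np.exp(-(X @ w + b)))

# Stage 1: several base models on bootstrap resamples (bagging flavour).
stage1 = []
for _ in range(5):
    idx = rng.integers(0, len(X_tr), len(X_tr))
    stage1.append(train_logreg(X_tr[idx], y_tr[idx]))

def stage1_features(X):
    return np.column_stack([predict_proba(m, X) for m in stage1])

# Stage 2 (the cascade): a model trained on the stage-1 predictions.
stage2 = train_logreg(stage1_features(X_tr), y_tr)

# Final prediction: ensemble vote between the cascade and the stage-1 mean.
final_prob = 0.5 * predict_proba(stage2, stage1_features(X_te)) \
           + 0.5 * stage1_features(X_te).mean(axis=1)
final = final_prob > 0.5
acc = (final == (y_te > 0.5)).mean()
print(f"cascade ensemble test accuracy: {acc:.2f}")
```

The design choice mirrors the abstract: the second stage learns how to weigh the base models, while the vote hedges against the second stage itself overfitting.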
29 pages, 6436 KiB  
Article
Fish Monitoring from Low-Contrast Underwater Images
by Nikos Petrellis, Georgios Keramidas, Christos P. Antonopoulos and Nikolaos Voros
Electronics 2023, 12(15), 3338; https://doi.org/10.3390/electronics12153338 - 4 Aug 2023
Cited by 2 | Viewed by 1681
Abstract
A toolset supporting fish detection, orientation, tracking, and especially morphological feature estimation with high speed and accuracy is presented in this paper. It can be exploited in fish farms to automate everyday procedures, including size measurement and optimal harvest time estimation, fish health assessment, quantification of feeding needs, etc. It can also be used in an open-sea environment to monitor fish size, behavior, and the population of various species. An efficient deep learning technique for fish detection is employed and adapted, while methods for fish tracking are also proposed. The fish orientation is classified in order to apply a shape alignment technique based on the Ensemble of Regression Trees machine learning method. Shape alignment allows the estimation of fish dimensions (length, height) and the localization of fish body parts of particular interest, such as the eyes and gills. The proposed method can estimate the position of 18 landmarks with an accuracy of about 95% from low-contrast underwater images where the fish can hardly be distinguished from its background. Hardware and software acceleration techniques have been applied to the shape alignment process, reducing the frame processing latency to less than 0.5 µs on a general-purpose computer and less than 16 ms on an embedded platform. As a case study, the developed system has been trained and tested with several Mediterranean fish species in the category of seabream. A large public dataset with low-resolution underwater videos and images has also been developed to test the proposed system under worst-case conditions. Full article
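As a sketch of how estimated landmarks translate into fish dimensions, the following assumes a hypothetical 18-landmark output and an assumed pixel-to-centimetre calibration (the coordinates and the calibration constant are illustrative, not the paper’s landmark scheme); length and height are taken as extents along the body’s principal axes.

```python
import numpy as np

# Hypothetical output of the shape-alignment stage: 18 (x, y) landmarks
# in pixels for one detected fish (values are illustrative).
landmarks = np.array([
    [ 12, 40], [ 30, 22], [ 60, 14], [ 95, 12], [130, 16],
    [158, 28], [170, 40], [158, 52], [130, 62], [ 95, 66],
    [ 60, 64], [ 30, 58], [ 20, 34], [ 20, 46], [ 28, 40],
    [ 34, 36], [ 34, 44], [150, 40],
])

PX_PER_CM = 4.0   # assumed calibration, e.g. from a reference target

def fish_dimensions(pts, px_per_cm):
    """Length = extent along the body axis (1st principal component);
    height = extent along the perpendicular axis."""
    centred = pts - pts.mean(axis=0)
    u, s, vt = np.linalg.svd(centred.astype(float), full_matrices=False)
    proj = centred @ vt.T                # rotate into body axes
    length_px = np.ptp(proj[:, 0])
    height_px = np.ptp(proj[:, 1])
    return length_px / px_per_cm, height_px / px_per_cm

length_cm, height_cm = fish_dimensions(landmarks, PX_PER_CM)
print(f"length = {length_cm:.1f} cm, height = {height_cm:.1f} cm")
```

Using the principal axis rather than the raw bounding box makes the measurement robust to the fish swimming at an angle to the camera plane.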
18 pages, 8039 KiB  
Article
A Thorough Evaluation of GaN HEMT Degradation under Realistic Power Amplifier Operation
by Gianni Bosi, Antonio Raffo, Valeria Vadalà, Rocco Giofrè, Giovanni Crupi and Giorgio Vannini
Electronics 2023, 12(13), 2939; https://doi.org/10.3390/electronics12132939 - 4 Jul 2023
Cited by 1 | Viewed by 1315
Abstract
In this paper, we experimentally investigate the effects of degradation observed on 0.15-µm GaN HEMT devices operating under realistic power amplifier conditions. These conditions are applied to the devices under test (DUTs) by exploiting a low-frequency load-pull characterization technique that provides information consistent with RF operation, with the advantage of revealing electrical quantities not directly detectable at high frequency. Quantities such as the resistive gate current play a fundamental role in the analysis of technology reliability. The experiments are carried out on DUTs of the same periphery under two different power amplifier operations: a saturated class-AB condition, which emphasizes the degradation effects produced by high temperatures due to power dissipation, and a class-E condition, which enhances the effects of high electric fields. The experiments are carried out at 30 °C and 100 °C, and the results are compared to evaluate how a specific RF condition can impact device degradation. To the authors’ knowledge, such a comparison has never been carried out and represents the main novelty of the present study. Full article
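A back-of-the-envelope sketch of why the saturated class-AB condition stresses the device thermally: the average dissipated power over one cycle of (hypothetical) drain waveforms, converted to a channel-temperature estimate via an assumed thermal resistance. All waveform values, the thermal resistance, and the ambient temperature are illustrative, not measured data from the paper.

```python
import numpy as np

# One RF cycle of hypothetical drain voltage/current waveforms, as a
# low-frequency load-pull bench would record them (values illustrative).
t = np.linspace(0, 1, 1000, endpoint=False)     # normalised period
v_ds = 20 + 15 * np.sin(2 * np.pi * t)          # V
i_d  = 0.10 * (1 - np.sin(2 * np.pi * t))       # A, roughly out of phase

p_inst = v_ds * i_d            # instantaneous dissipated power, W
p_diss = p_inst.mean()         # average over the cycle

# Channel-temperature estimate from an assumed thermal resistance.
R_TH = 30.0        # K/W, hypothetical die+package value
t_ambient = 100.0  # deg C, the hotter test condition
t_channel = t_ambient + R_TH * p_diss
print(f"P_diss = {p_diss:.2f} W, estimated T_channel = {t_channel:.1f} degC")
```

The same waveform data would show the class-E tradeoff going the other way: lower overlap of voltage and current (less dissipation) but higher voltage peaks (stronger electric fields).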