Pattern Recognition and Sensor Fusion Solutions in Intelligent Sensor Systems

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: closed (31 December 2023) | Viewed by 78461

Special Issue Editors


Guest Editor
Department of Mechatronics and Automation, Faculty of Engineering, University of Szeged, 6725 Szeged, Hungary
Interests: intelligent sensor systems; wireless sensor networks; sensor calibration; inertial and magnetic sensors; sensor applications; human-machine interfaces; wearable sensors; sensor fusion; localization; intelligent transportation systems; vehicle detection and classification systems; robotics; mobile robots; multi-robot systems; pattern recognition; signal processing; machine learning

Guest Editor
Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
Interests: information and communication technologies; signal processing; information theory; data mining and knowledge discovery; sensors; feedback systems (biomechanical, electrical, monetary, social, economic)

Guest Editor
Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
Interests: sensors; wearable devices; digital signal processing; motion analysis; motion pattern recognition

Guest Editor
Software Engineering Institute, John von Neumann Faculty of Informatics, Óbuda University, 1034 Budapest, Hungary
Interests: machine learning; deep neural networks; parallel programming

Special Issue Information

Dear Colleagues,

Recent advances in technology have led to the development of various intelligent sensor systems, in which pattern recognition and sensor fusion algorithms play a crucial role in most cases. For the effective operation of these algorithms, advanced solutions are required in many areas, such as pre-processing, feature extraction, feature selection, classification, state estimation, and implementation.

Intelligent sensor systems can be based on signals from various sensor types. Many applications use sensors that provide 2D or 3D data, such as cameras and LiDARs, where computer vision solutions are required. Others apply time-series analysis to signals collected from acoustic sensors, inertial sensors (IMUs), magnetometers, etc. Both types, as well as their fusion, are widely used in industrial, medical, health, and entertainment applications (e.g., robotics, pose estimation, human–computer interaction, navigation, intelligent transportation systems, and activity and movement analysis).

The processing can be performed on a central unit or in a decentralized manner, where the required computations are carried out on the embedded systems of the individual units. Since most applications require real-time operation, the design of these pattern recognition and sensor fusion algorithms and their implementation on embedded systems are challenging tasks.

The aim of this Special Issue is to collect high-quality research papers and up-to-date reviews that address challenging topics in sensory signal-based pattern recognition and sensor fusion. Topics of interest include, but are not limited to, the following:

  • Sensor calibration, pre-processing of signals;
  • Signal and image analysis, feature extraction, and feature selection methods;
  • Machine learning, decision-making, and classification methods;
  • Sensor fusion methods;
  • Implementation of pattern recognition and sensor fusion algorithms on embedded systems;
  • Novel applications of sensory signal-based pattern recognition and sensor fusion methods.

Dr. Peter Sarcevic
Prof. Dr. Sašo Tomažič
Dr. Akos Odry
Dr. Sara Stančin
Dr. Gábor Kertész
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • pattern recognition 
  • sensor fusion 
  • intelligent sensor systems 
  • machine learning 
  • classification methods 
  • pre-processing 
  • signal and image analysis 
  • computer vision 
  • embedded systems 
  • sensor applications

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.


Published Papers (29 papers)


Research


20 pages, 3445 KiB  
Article
Fusion of Coherent and Non-Coherent Pol-SAR Features for Land Cover Classification
by Konstantinos Karachristos, Georgia Koukiou and Vassilis Anastassopoulos
Electronics 2024, 13(3), 634; https://doi.org/10.3390/electronics13030634 - 2 Feb 2024
Viewed by 828
Abstract
Remote Sensing plays a fundamental role in acquiring crucial information about the Earth’s surface from a distance, especially through fully polarimetric data, which offers a rich source of information for diverse applications. However, extracting meaningful insights from this intricate data necessitates sophisticated techniques. In addressing this challenge, one predominant trend that has emerged is known as target decomposition techniques. These techniques can be broadly classified into coherent and non-coherent methods. Each of these methods provides high-quality information using different procedures. In this context, this paper introduces innovative feature fusion techniques, amalgamating coherent and non-coherent information. While coherent techniques excel in detailed exploration and specific feature extraction, non-coherent methods offer a broader perspective. Our feature fusion techniques aim to harness the strengths of both approaches, providing a comprehensive and high-quality fusion of information. In the first approach, features derived from Pauli coherent decomposition, Freeman–Durden non-coherent technique, and the Symmetry criterion from Cameron’s stepwise algorithm are combined to construct a sophisticated feature vector. This fusion is achieved using the well-established Fisher Linear Discriminant Analysis algorithm. In the second approach, the Symmetry criterion serves as the basis for fusing coherent and non-coherent coefficients, resulting in the creation of a new feature vector. Both approaches aim to exploit information simultaneously extracted from coherent and non-coherent methods in feature extraction from Remote Sensing data through fusion at the feature level. To evaluate the effectiveness of the feature generated by the proposed fusion techniques, we employ a land cover classification procedure. This involves utilizing a basic classifier, achieving overall accuracies of approximately 82% and 86% for each of the two proposed techniques. 
Furthermore, the accuracy in individual classes surpasses 92%. The evaluation aims to gauge the effectiveness of the fusion methods in enhancing feature extraction from fully polarimetric data and opens avenues for further exploration in the integration of coherent and non-coherent features for remote sensing applications. Full article
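The feature-level fusion via Fisher Linear Discriminant Analysis described above can be sketched minimally as follows. This is an illustrative, hypothetical two-class, 2-D example (synthetic numbers stand in for the Pauli, Freeman–Durden, and Symmetry coefficients); it is not the authors' implementation.

```python
# Sketch: feature-level fusion followed by a two-class Fisher discriminant.

def fuse(coherent, non_coherent):
    """Feature-level fusion: concatenate coherent and non-coherent features."""
    return coherent + non_coherent

def fisher_direction(class0, class1):
    """Fisher discriminant direction w = Sw^-1 (m1 - m0) for 2-D features."""
    def mean(samples):
        n = len(samples)
        return [sum(s[i] for s in samples) / n for i in range(len(samples[0]))]
    def scatter(samples, m):
        s = [[0.0, 0.0], [0.0, 0.0]]
        for x in samples:
            d = [x[0] - m[0], x[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    s[i][j] += d[i] * d[j]
        return s
    m0, m1 = mean(class0), mean(class1)
    s0, s1 = scatter(class0, m0), scatter(class1, m1)
    sw = [[s0[i][j] + s1[i][j] for j in range(2)] for i in range(2)]  # within-class scatter
    det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
    inv = [[sw[1][1] / det, -sw[0][1] / det],
           [-sw[1][0] / det, sw[0][0] / det]]
    dm = [m1[0] - m0[0], m1[1] - m0[1]]
    return [inv[0][0] * dm[0] + inv[0][1] * dm[1],
            inv[1][0] * dm[0] + inv[1][1] * dm[1]]
```

Projecting fused feature vectors onto `fisher_direction` yields the 1-D discriminant score that a basic classifier can threshold.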

20 pages, 9396 KiB  
Article
Camera–LiDAR Calibration Using Iterative Random Sampling and Intersection Line-Based Quality Evaluation
by Ju Hee Yoo, Gu Beom Jung, Ho Gi Jung and Jae Kyu Suhr
Electronics 2024, 13(2), 249; https://doi.org/10.3390/electronics13020249 - 5 Jan 2024
Cited by 1 | Viewed by 1119
Abstract
This paper proposes a novel camera–LiDAR calibration method that utilizes an iterative random sampling and intersection line-based quality evaluation using a foldable plane pair. Firstly, this paper suggests using a calibration object consisting of two small planes with ChArUco patterns, which is easy to make and convenient to carry. Secondly, the proposed method adopts an iterative random sampling to make the calibration procedure robust against sensor data noise and incorrect object recognition. Lastly, this paper proposes a novel quality evaluation method based on the dissimilarity between two intersection lines of the plane pairs from the two sensors. Thus, the proposed method repeats random sampling of sensor data, extrinsic parameter estimation, and quality evaluation of the estimation result in order to determine the most appropriate calibration result. Furthermore, this method can also be used for the LiDAR–LiDAR calibration with a slight modification. In experiments, the proposed method was quantitatively evaluated using simulation data and qualitatively assessed using real-world data. The experimental results show that the proposed method can successfully perform both camera–LiDAR and LiDAR–LiDAR calibrations while outperforming the previous approaches. Full article
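The iterative sample → estimate → evaluate → keep-the-best loop has the same shape as a RANSAC-style search. The toy below fits a 2-D line with inlier counting as the quality score; in the paper, the estimated model is the camera–LiDAR extrinsic transform and the quality score is the intersection-line dissimilarity, so all names and numbers here are illustrative.

```python
import random

def fit_line(p, q):
    """Line y = a*x + b through two sampled points."""
    (x1, y1), (x2, y2) = p, q
    a = (y2 - y1) / (x2 - x1)
    return a, y1 - a * x1

def best_model(points, iterations=200, tol=0.1, seed=0):
    """Repeat random sampling, estimation, and quality evaluation;
    keep the estimate with the best quality score (here: inlier count)."""
    rng = random.Random(seed)
    best, best_score = None, -1
    for _ in range(iterations):
        p, q = rng.sample(points, 2)
        if p[0] == q[0]:          # degenerate sample, skip
            continue
        a, b = fit_line(p, q)
        score = sum(1 for (x, y) in points if abs(a * x + b - y) < tol)
        if score > best_score:
            best, best_score = (a, b), score
    return best, best_score
```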

22 pages, 11689 KiB  
Article
Automatic Stroke Measurement Method in Speed Skating: Analysis of the First 100 m after the Start
by Yeong-Je Park, Ji-Yeon Moon and Eui Chul Lee
Electronics 2023, 12(22), 4651; https://doi.org/10.3390/electronics12224651 - 15 Nov 2023
Viewed by 1362
Abstract
In speed skating, the number of strokes in the first 100 m section serves as an important metric of final performance. However, the conventional method, relying on human vision, has limitations in terms of real-time counting and accuracy. This study presents a solution for counting strokes in the first 100 m of a speed skating race, aiming to overcome the limitations of human vision. The method uses image recognition technology, specifically MediaPipe, to track key body joint coordinates during the skater’s motion. These coordinates are calculated into important body angles, including those from the shoulder to the knee and from the pelvis to the ankle. To quantify the skater’s motion, the study introduces generalized labeling logic (GLL), a key index derived from angle data. The GLL signal is refined using Gaussian filtering to remove noise, and the number of inflection points in the filtered GLL signal is used to determine the number of strokes. The method was designed with a focus on frontal videos and achieved an excellent accuracy of 99.91% when measuring stroke counts relative to actual counts. This technology has great potential for enhancing training and evaluation in speed skating. Full article
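The counting pipeline (Gaussian filtering of the GLL signal, then counting inflection points) can be sketched as below. The function names, kernel width, and edge handling are assumptions for illustration, not the authors' implementation.

```python
import math

def gaussian_kernel(sigma, radius):
    """Normalized 1-D Gaussian kernel."""
    k = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def smooth(signal, sigma=2.0):
    """Gaussian filtering to remove noise (edges handled by clamping)."""
    radius = int(3 * sigma)
    kern = gaussian_kernel(sigma, radius)
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kern):
            idx = min(max(i + j - radius, 0), len(signal) - 1)
            acc += w * signal[idx]
        out.append(acc)
    return out

def count_inflections(signal):
    """Count inflection points as sign changes of the second difference."""
    second = [signal[i + 1] - 2 * signal[i] + signal[i - 1]
              for i in range(1, len(signal) - 1)]
    count, prev = 0, 0.0
    for v in second:
        if prev != 0.0 and v != 0.0 and (v > 0) != (prev > 0):
            count += 1
        if v != 0.0:
            prev = v
    return count
```

The stroke count would then be derived from `count_inflections(smooth(gll))`.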

24 pages, 8292 KiB  
Article
AP-NURSE: A Modular Tool to Ease the Life of Patients Suffering from Cognitive Diseases
by Štefan Čerba, Branislav Vrban, Jakub Lüley, Marian Vojs, Martin Vrška, Miroslav Behúl, Jozef Bendík, Matej Cenký, František Janíček, Libor Majer and Gábor Gyepes
Electronics 2023, 12(18), 3818; https://doi.org/10.3390/electronics12183818 - 9 Sep 2023
Viewed by 928
Abstract
This paper presents the development and testing of the AP-NURSE monitoring tool aimed to ease the everyday life of patients with Alzheimer’s and Parkinson’s diseases as well as those who take care of such patients. It summarises the activities carried out as part of the international Interreg CE niCE-life project. The team of STU designed, developed, constructed, and tested the AP-NURSE intelligent tool. The paper starts with demographic information on Slovakia. It highlights the fact that population aging is a significant issue, which requires measures to mitigate its consequences at local and European levels. The next part describes the final design of AP-NURSE Home and Care devices and their versions, including details on their hardware and software. A significant part of the paper deals with the testing of AP-NURSE in both laboratory and authentic environments. The laboratory testing was carried out directly at STU. The final testing in an authentic environment was conducted in two social and healthcare centres in Bratislava and in Warsaw. After the activities described in this paper, AP-NURSE reached TRL-6. The AP-NURSE Home platform developed based on the ESP microcontrollers demonstrated the best functionality and has the potential to be used in real practice. Full article

25 pages, 22129 KiB  
Article
TLI-YOLOv5: A Lightweight Object Detection Framework for Transmission Line Inspection by Unmanned Aerial Vehicle
by Hanqiang Huang, Guiwen Lan, Jia Wei, Zhan Zhong, Zirui Xu, Dongbo Li and Fengfan Zou
Electronics 2023, 12(15), 3340; https://doi.org/10.3390/electronics12153340 - 4 Aug 2023
Cited by 4 | Viewed by 2377
Abstract
Unmanned aerial vehicles (UAVs) have become an important tool for transmission line inspection, and the inspection images taken by UAVs often contain complex backgrounds and many types of targets, which poses many challenges to object detection algorithms. In this paper, we propose a lightweight object detection framework, TLI-YOLOv5, for transmission line inspection tasks. Firstly, we incorporate the parameter-free attention module SimAM into the YOLOv5 network. This integration enhances the network’s feature extraction capabilities, without introducing additional parameters. Secondly, we introduce the Wise-IoU (WIoU) loss function to evaluate the quality of anchor boxes and allocate various gradient gains to them, aiming to improve network performance and generalization capabilities. Furthermore, we employ transfer learning and cosine learning rate decay to further enhance the model’s performance. The experimental evaluations performed on our UAV transmission line inspection dataset reveal that, in comparison to the original YOLOv5n, TLI-YOLOv5 increases precision by 0.40%, recall by 4.01%, F1 score by 1.69%, mean average precision at 50% IoU (mAP50) by 2.91%, and mean average precision from 50% to 95% IoU (mAP50-95) by 0.74%, while maintaining a recognition speed of 76.1 frames per second and model size of only 4.15 MB, exhibiting attributes such as small size, high speed, and ease of deployment. With these advantages, TLI-YOLOv5 proves more adept at meeting the requirements of modern, large-scale transmission line inspection operations, providing a reliable, efficient solution for such demanding tasks. Full article
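For readers unfamiliar with IoU-based anchor-quality measures, a minimal sketch follows. `wiou_loss` is a simplified WIoU-v1-style scaling of the IoU loss by a center-distance factor; the actual Wise-IoU used in the paper additionally involves gradient-detachment and dynamic focusing details that are omitted here.

```python
import math

def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def wiou_loss(pred, target):
    """Sketch of a WIoU-v1-style loss: (1 - IoU) scaled by a factor that
    grows with the distance between box centers, normalized by the
    smallest enclosing box diagonal."""
    px, py = (pred[0] + pred[2]) / 2, (pred[1] + pred[3]) / 2
    tx, ty = (target[0] + target[2]) / 2, (target[1] + target[3]) / 2
    wg = max(pred[2], target[2]) - min(pred[0], target[0])
    hg = max(pred[3], target[3]) - min(pred[1], target[1])
    r = math.exp(((px - tx) ** 2 + (py - ty) ** 2) / (wg ** 2 + hg ** 2))
    return r * (1.0 - iou(pred, target))
```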

17 pages, 3063 KiB  
Article
Online Outdoor Terrain Classification Algorithm for Wheeled Mobile Robots Equipped with Inertial and Magnetic Sensors
by Peter Sarcevic, Dominik Csík, Richard Pesti, Sara Stančin, Sašo Tomažič, Vladimir Tadic, Juvenal Rodriguez-Resendiz, József Sárosi and Akos Odry
Electronics 2023, 12(15), 3238; https://doi.org/10.3390/electronics12153238 - 26 Jul 2023
Cited by 2 | Viewed by 1735
Abstract
Terrain classification provides valuable information for both control and navigation algorithms of wheeled mobile robots. In this paper, a novel online outdoor terrain classification algorithm is proposed for wheeled mobile robots. The algorithm is based on only time-domain features with both low computational and low memory requirements, which are extracted from the inertial and magnetic sensor signals. Multilayer perceptron (MLP) neural networks are applied as classifiers. The algorithm is tested on a measurement database collected using a prototype measurement system for various outdoor terrain types. Different datasets were constructed based on various setups of processing window sizes, used sensor types, and robot speeds. To examine the possibilities of the three applied sensor types in the application, the features extracted from the measurement data of the different sensors were tested alone, in pairs and fused together. The algorithm is suitable to operate online on the embedded system of the mobile robot. The achieved results show that using the applied time-domain feature set the highest classification efficiencies on unknown data can be above 98%. It is also shown that the gyroscope provides higher classification rates than the widely used accelerometer. The magnetic sensor alone cannot be effectively used but fusing the data of this sensor with the data of the inertial sensors can improve the performance. Full article
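Low-cost time-domain features of the kind the abstract describes can be computed per sensor-axis window as sketched below. The exact feature set is an assumption for illustration; the paper's list may differ.

```python
import math

def time_domain_features(window):
    """Cheap time-domain features for one sensor-axis processing window."""
    n = len(window)
    mean = sum(window) / n
    var = sum((x - mean) ** 2 for x in window) / n
    rms = math.sqrt(sum(x * x for x in window) / n)   # root mean square
    mav = sum(abs(x) for x in window) / n             # mean absolute value
    zc = sum(1 for a, b in zip(window, window[1:])    # zero crossings
             if (a > 0) != (b > 0))
    return {"mean": mean, "std": math.sqrt(var), "rms": rms,
            "mav": mav, "zero_crossings": zc}
```

Features from the accelerometer, gyroscope, and magnetometer windows would then be concatenated into the MLP input vector.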

23 pages, 8354 KiB  
Article
Low Cost PID Controller for Student Digital Control Laboratory Based on Arduino or STM32 Modules
by Krzysztof Sozański
Electronics 2023, 12(15), 3235; https://doi.org/10.3390/electronics12153235 - 26 Jul 2023
Cited by 2 | Viewed by 4636
Abstract
In the teaching process, it is important that students do not carry out exercises only by computer simulations, but also that they carry out research in real time. In times of distance learning during the COVID-19 pandemic, it would be necessary to find a solution so that the students can perform such exercises individually at home. Therefore, it has become necessary to develop cheap and simple modules of digital controllers along with analog objects with adjustable order and time constants. This paper describes a low-cost proportional–integral–derivative (PID) controller for teaching students control techniques and analog control objects in real time. The PID controller is based on the cheap and widely available microcontroller modules Arduino or STM32. The advantage of this solution is that the algorithm of the digital PID controller is calculated every constant period of time. Both the solutions presented in the paper have been successfully tested by students in practice during remote learning during the COVID-19 pandemic. Full article
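The fixed-period digital PID algorithm mentioned in the abstract can be sketched generically as below (this is a textbook discrete PID, not the author's Arduino/STM32 code; gains and limits are placeholders).

```python
class PID:
    """Discrete PID controller recomputed every fixed sample period dt."""

    def __init__(self, kp, ki, kd, dt, out_min=None, out_max=None):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0
        self.out_min, self.out_max = out_min, out_max

    def update(self, setpoint, measurement):
        """One control step: called once per sample period."""
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * derivative
        if self.out_max is not None:
            out = min(out, self.out_max)
        if self.out_min is not None:
            out = max(out, self.out_min)
        return out
```

Driving a simulated first-order analog object with this loop shows the controller removing steady-state error, which is the kind of real-time exercise the module is meant for.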

22 pages, 10205 KiB  
Article
Research on Road Sign Detection and Visual Depth Perception Technology for Mobile Robots
by Jianwei Zhao and Yushuo Liu
Electronics 2023, 12(14), 3202; https://doi.org/10.3390/electronics12143202 - 24 Jul 2023
Cited by 1 | Viewed by 1287
Abstract
To accomplish the task of detecting and avoiding road signs by mobile robots for autonomous running, in this paper, we propose a method of road sign detection and visual depth perception based on improved Yolov5 and improved centroid depth value filtering. First, the Yolov5 model has a large number of parameters, a large computational volume, and a large model size, which is difficult to deploy to the CPU side (industrial control computer) of the robot mobile platform. To solve this problem, the study proposes a lightweight Yolov5-SC3FB model. Compared with the original Yolov5n model, the Yolov5-SC3FB model loses only a little detection accuracy, while the parameter volume is reduced to 0.19 M, the computational volume is reduced to 0.5 GFLOPS, and the model size is only 0.72 MB, making it easy to deploy on mobile robot platforms. Secondly, the obtained depth value of the center point of the bounding box can be 0 due to the influence of noise. To solve this problem, we propose an improved filtering method for the depth value of the center point in the study, and the relative error of its depth measurement is only 2%. Finally, the improved Yolov5-SC3FB model is fused with the improved filtering method for acquiring centroid depth values and the fused algorithm is deployed to the mobile robot platform. We verified the effectiveness of this fusion algorithm for the detection and avoidance of road signs of the robot. Thus, it can enable the mobile robot to correctly perceive the environment and achieve autonomous running. Full article
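One common way to handle a zero (invalid) center-point depth is to fall back to the valid depths in a small window around the center. The median-of-neighbors strategy below is an assumption for illustration, not necessarily the paper's exact filter.

```python
def center_depth(depth_map, cx, cy, radius=2):
    """Depth at a bounding-box center (pixel cx, cy); when the center
    reads 0 (noise), fall back to the median of the valid non-zero
    depths in a (2*radius+1)^2 window around it."""
    d = depth_map[cy][cx]
    if d > 0:
        return d
    valid = []
    for y in range(max(0, cy - radius), min(len(depth_map), cy + radius + 1)):
        for x in range(max(0, cx - radius), min(len(depth_map[0]), cx + radius + 1)):
            if depth_map[y][x] > 0:
                valid.append(depth_map[y][x])
    if not valid:
        return 0.0            # no valid depth available in the window
    valid.sort()
    return valid[len(valid) // 2]
```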

20 pages, 2526 KiB  
Article
Automatic Labeling of Natural Landmarks for Wheelchair Motion Planning
by Ba-Viet Ngo, Thanh-Hai Nguyen and Chi Cuong Vu
Electronics 2023, 12(14), 3093; https://doi.org/10.3390/electronics12143093 - 17 Jul 2023
Cited by 1 | Viewed by 1167
Abstract
Labeling landmarks for the mobile plan of the automatic electric wheelchair is essential, because it can assist disabled people. In particular, labeled landmark images will help the wheelchairs to locate landmarks and move more accurately and safely. Here, we propose an automatic detection of natural landmarks in RGBD images for navigation of mobile platforms in an indoor environment. This method can reduce the time for manually collecting and creating a dataset of landmarks. The wheelchair, equipped with a camera system, is allowed to move along corridors to detect and label natural landmarks automatically. These landmarks contain the camera and wheelchair positions with the 3D coordinates when storing the labeled landmark. The feature density method is comprised of Oriented FAST and Rotated BRIEF (ORB) feature extractors. Moreover, the central coordinates of the marked points in the obtained RGB images will be mapped to the images with the depth axis for determining the position of the RGB-D camera system in the spatial domain. An encoder and kinematics equations are applied to determine the position during movement. As expected, the system shows good results, such as a high IoU value of over 0.8 at a distance of less than 2 m and a fast time of 41.66 ms for object detection. This means that our technique is very effective for the automatic movement of the wheelchair. Full article

21 pages, 8353 KiB  
Article
Lightweight Infrared and Visible Image Fusion via Adaptive DenseNet with Knowledge Distillation
by Zongqing Zhao, Shaojing Su, Junyu Wei, Xiaozhong Tong and Weijia Gao
Electronics 2023, 12(13), 2773; https://doi.org/10.3390/electronics12132773 - 22 Jun 2023
Cited by 6 | Viewed by 1895
Abstract
The fusion of infrared and visible images produces a complementary image that captures both infrared radiation information and visible texture structure details using the respective sensors. However, the current deep-learning-based fusion approaches mainly tend to prioritize visual quality and statistical metrics, leading to an increased model complexity and weight parameter sizes. To address these challenges, we propose a novel dual-light fusion approach using adaptive DenseNet with knowledge distillation to learn and compress from pre-existing fusion models, which achieves the goals of model compression through the use of hyperparameters such as the width and depth of the model network. The effectiveness of our proposed approach is evaluated on a new dataset comprising three public datasets (MSRS, M3FD, and LLVIP), and both qualitative and quantitative experimental results show that the distillated adaptive DenseNet model effectively matches the original fusion models’ performance with smaller model weight parameters and shorter inference times. Full article
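A minimal sketch of the distillation objective commonly used for this kind of model compression: a KL term between temperature-softened teacher and student distributions, scaled by T². The paper's exact loss may differ; this illustrates only the standard mechanism.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax distribution."""
    exps = [math.exp(z / temperature) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """KL(teacher || student) on softened distributions, scaled by T^2
    as is conventional so gradients keep a comparable magnitude."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return temperature ** 2 * kl
```

The smaller adaptive DenseNet student would be trained to minimize this term against the larger pre-existing fusion model's outputs.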

18 pages, 16443 KiB  
Article
AI-Based Real-Time Star Tracker
by Guy Carmeli and Boaz Ben-Moshe
Electronics 2023, 12(9), 2084; https://doi.org/10.3390/electronics12092084 - 2 May 2023
Cited by 1 | Viewed by 6836
Abstract
Many systems on Earth and in space require precise orientation when observing the sky, particularly for objects that move at high speeds in space, such as satellites, spaceships, and missiles. These systems often rely on star trackers, which are devices that use star patterns to determine the orientation of the spacecraft. However, traditional star trackers are often expensive and have limitations in their accuracy and robustness. To address these challenges, this research aims to develop a high-performance and cost-effective AI-based Real-Time Star Tracker system as a basic platform for micro/nanosatellites. The system uses existing hardware, such as FPGAs and cameras, which are already part of many avionics systems, to extract line-of-sight (LOS) vectors from sky images. The algorithm implemented in this research is a “lost-in-space” algorithm that uses a self-organizing neural network map (SOM) for star pattern recognition. SOM is an unsupervised machine learning algorithm that is usually used for data visualization, clustering, and dimensionality reduction. Today’s technologies enable star-based navigation, making matching a sky image to the star map an important aspect of navigation. This research addresses the need for reliable, low-cost, and high-performance star trackers, which can accurately recognize star patterns from sky images with a success rate of about 98% in approximately 870 microseconds. Full article
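A toy self-organizing map trainer showing the best-matching-unit update the abstract refers to. Grid size, learning-rate schedule, and neighborhood function are illustrative assumptions; this is not the flight implementation.

```python
import math
import random

def train_som(data, grid_w, grid_h, dim, epochs=20, seed=0):
    """Minimal SOM: for each sample, find the best-matching unit (BMU)
    and pull it and its grid neighbours towards the sample."""
    rng = random.Random(seed)
    nodes = [[rng.random() for _ in range(dim)] for _ in range(grid_w * grid_h)]
    for epoch in range(epochs):
        lr = 0.5 * (1 - epoch / epochs)                      # decaying rate
        radius = max(1.0, (grid_w / 2) * (1 - epoch / epochs))
        for x in data:
            bmu = min(range(len(nodes)),
                      key=lambda i: sum((nodes[i][d] - x[d]) ** 2
                                        for d in range(dim)))
            bx, by = bmu % grid_w, bmu // grid_w
            for i, node in enumerate(nodes):
                gx, gy = i % grid_w, i // grid_w
                dist2 = (gx - bx) ** 2 + (gy - by) ** 2
                if dist2 <= radius ** 2:
                    h = math.exp(-dist2 / (2 * radius ** 2))  # neighbourhood
                    for d in range(dim):
                        node[d] += lr * h * (x[d] - node[d])
    return nodes
```

In a star tracker, the inputs would be geometric star-pattern descriptors rather than the 2-D toy vectors used here.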

13 pages, 3559 KiB  
Article
Passive IoT Optical Fiber Sensor Network for Water Level Monitoring with Signal Processing of Feature Extraction
by Hoon-Keun Lee, Youngmi Kim, Sungbaek Park and Joonyoung Kim
Electronics 2023, 12(8), 1823; https://doi.org/10.3390/electronics12081823 - 12 Apr 2023
Cited by 2 | Viewed by 1891
Abstract
This paper presents a real-time remote water level monitoring system based on dense wavelength division multiplexing (DWDM)-passive optical fiber sensor (OFS) network for the application of the Internet of Things (IoT). This network employs a broadband light source based on amplified spontaneous emission (ASE) as a seed light. This ASE light is spectrum-sliced by an athermal type arrayed waveguide grating (200 GHz × 16 channel), then distributed towards multiple sensing units (SU). Here, 16 SUs are installed vertically at the specified height in the water pool according to the design specification (i.e., spatial resolution). Then, each SU reflects an optical spectrum having a different reflection coefficient depending on the surrounding medium (e.g., air or water). By measuring these reflected optical spectra with an optical spectrum analyzer, the water level can be easily recognized in real time. However, as the sensing distance increases, system performance is severely degraded due to the Rayleigh Back-Scattering of the ASE light. As a result, the remote sensing capability is limited at a short distance (i.e., <10 km). To overcome this limitation, we propose a simple signal processing technique based on feature extraction of received optical spectra, which includes embedding a peak detection algorithm with a signal validation check. Specifically, the proposed signal processing performs the peak power detection, signal quality monitoring, and determination/display of the actual water level through three function modules, i.e., data save/load module, signal processing module, and Human–Machine Interface display module. In particular, the signal quality of the remote sensing network can be easily monitored through several factors, such as the number of spectral peaks, the wavelength spacing between neighboring peaks and the pattern of detected peak power.
Moreover, by using this validation check algorithm, it is also possible to diagnose various error types (such as peak detection error, loss of data and so on) according to the pattern of measured optical spectra. As a result, the IoT sensor network can recognize 17 different level statuses for the water level measurement from a distance of about 25 km away without active devices such as optical amplifiers (i.e., passive remote sensing). Full article
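The peak-detection-with-validation idea can be sketched as follows (a minimal illustration only, not the authors' implementation; the function names, thresholds, and the 1.6 nm spacing assumed for a 200 GHz grid near 1550 nm are our assumptions):

```python
def detect_peaks(wavelengths, powers, threshold):
    """Return (wavelength, power) pairs at local maxima above a power threshold."""
    peaks = []
    for i in range(1, len(powers) - 1):
        if powers[i] > threshold and powers[i] >= powers[i - 1] and powers[i] > powers[i + 1]:
            peaks.append((wavelengths[i], powers[i]))
    return peaks

def validate(peaks, expected_count, expected_spacing, tol):
    """Signal-quality check: peak count, then inter-peak wavelength spacing."""
    if len(peaks) != expected_count:
        return "peak detection error"
    spacings = [b[0] - a[0] for a, b in zip(peaks, peaks[1:])]
    if any(abs(s - expected_spacing) > tol for s in spacings):
        return "spacing error"
    return "ok"
```

A spectrum passing both checks would then be mapped to one of the 17 level statuses; a failed check flags the corresponding error type instead.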

10 pages, 2355 KiB  
Article
A Novel Fiducial Point Extraction Algorithm to Detect C and D Points from the Acceleration Photoplethysmogram (CnD)
by Saad Abdullah, Abdelakram Hafid, Mia Folke, Maria Lindén and Annica Kristoffersson
Electronics 2023, 12(5), 1174; https://doi.org/10.3390/electronics12051174 - 28 Feb 2023
Cited by 7 | Viewed by 2028
Abstract
The extraction of relevant features from the photoplethysmography signal for estimating certain physiological parameters is a challenging task. Various feature extraction methods have been proposed in the literature. In this study, we present a novel fiducial point extraction algorithm, named “CnD”, to detect the c and d points from the acceleration photoplethysmogram (APG). The algorithm allows for the application of various pre-processing techniques, such as filtering, smoothing, and baseline drift removal; the calculation of the first, second, and third photoplethysmography derivatives; and the detection and highlighting of APG fiducial points. An evaluation of CnD indicated a high level of accuracy in the algorithm’s ability to identify fiducial points. Out of 438 APG fiducial c and d points, the algorithm accurately identified 434, an accuracy rate of 99%. This level of accuracy was consistent across all the test cases, with low error rates. These findings indicate that the algorithm has high potential for practical use as a reliable method for detecting fiducial points, providing a valuable new resource for researchers and healthcare professionals analyzing photoplethysmography signals. Full article
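For intuition, the derivative step behind the APG can be sketched as below (illustrative only; the actual CnD pre-processing and fiducial logic are more involved, and the helper names are our assumptions). On an APG beat, the alternating local maxima/minima correspond to the a, b, c, d, e waves in order of appearance:

```python
def second_derivative(signal, dt=1.0):
    """Central-difference second derivative; applied to a PPG beat this yields the APG."""
    return [(signal[i - 1] - 2 * signal[i] + signal[i + 1]) / dt ** 2
            for i in range(1, len(signal) - 1)]

def local_extrema(x):
    """Indices of local maxima/minima, in order of appearance."""
    out = []
    for i in range(1, len(x) - 1):
        if x[i] > x[i - 1] and x[i] >= x[i + 1]:
            out.append(("max", i))
        elif x[i] < x[i - 1] and x[i] <= x[i + 1]:
            out.append(("min", i))
    return out
```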

18 pages, 3007 KiB  
Article
Hyperspectral Image Classification with Deep CNN Using an Enhanced Elephant Herding Optimization for Updating Hyper-Parameters
by Kavitha Munishamaiaha, Senthil Kumar Kannan, DhilipKumar Venkatesan, Michał Jasiński, Filip Novak, Radomir Gono and Zbigniew Leonowicz
Electronics 2023, 12(5), 1157; https://doi.org/10.3390/electronics12051157 - 27 Feb 2023
Cited by 3 | Viewed by 2310
Abstract
Deep learning approaches based on convolutional neural networks (CNNs) have recently achieved success in computer vision, demonstrating significant superiority in the domain of image processing. For hyperspectral image (HSI) classification, convolutional neural networks are an efficient option, although most approaches rely on spectral information alone. The complex computation in convolutional neural networks requires hyper-parameter settings that attain highly accurate outputs, and finding them demands considerable computational time and effort. To this end, a bio-inspired metaheuristic strategy based on an enhanced form of elephant herding optimization is proposed in this research paper. It automatically searches for and targets suitable values of the convolutional neural network hyper-parameters. To design an automatic system for hyperspectral image classification, the enhanced elephant herding optimization (EEHO) with the AdaBound optimizer is implemented for tuning and updating the hyper-parameters of convolutional neural networks (CNN–EEHO–AdaBound). The experiments are carried out on benchmark datasets (Indian Pines and Salinas) for evaluation. The proposed methodology outperforms state-of-the-art methods in a comparative performance analysis, with the findings proving its effectiveness. The results show the improved accuracy of HSI classification achieved by optimising and tuning the hyper-parameters. Full article
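The core clan-update rule of basic elephant herding optimization can be sketched as follows (a toy illustration of the base algorithm, not the enhanced EEHO variant from the paper; the parameter values and names are our assumptions). Each elephant moves toward the clan's best solution (the matriarch), while the matriarch is replaced by a scaled clan centre:

```python
import random

def eho_clan_update(clan, fitness, alpha=0.5, beta=0.1):
    """One clan update of basic elephant herding optimization (minimization)."""
    best = min(clan, key=fitness)
    centre = [sum(x[d] for x in clan) / len(clan) for d in range(len(clan[0]))]
    new_clan = []
    for x in clan:
        if x is best:
            # Matriarch update: move to a scaled clan centre.
            new_clan.append([beta * c for c in centre])
        else:
            # Followers move toward the matriarch by a random fraction.
            new_clan.append([xi + alpha * (bi - xi) * random.random()
                             for xi, bi in zip(x, best)])
    return new_clan
```

In a hyper-parameter-tuning setting, each position vector would encode candidate CNN hyper-parameters and `fitness` would be a validation-error evaluation.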

17 pages, 6194 KiB  
Article
LiDAR SLAM with a Wheel Encoder in a Featureless Tunnel Environment
by Iulian Filip, Juhyun Pyo, Meungsuk Lee and Hangil Joe
Electronics 2023, 12(4), 1002; https://doi.org/10.3390/electronics12041002 - 17 Feb 2023
Cited by 9 | Viewed by 4796
Abstract
Simultaneous localization and mapping (SLAM) is a crucial algorithm in the autonomous navigation of ground vehicles. Several studies have been conducted to improve SLAM using various sensors and robot platforms. However, only a few works have focused on applications inside low-illumination, featureless tunnel environments. In this work, we present an improved SLAM algorithm that uses wheel encoder data from an autonomous ground vehicle (AGV) to obtain robust performance in a featureless tunnel environment. The improved SLAM system uses FAST-LIO2 LiDAR SLAM as the baseline algorithm, and the additional wheel encoder data are integrated into the baseline SLAM structure using the extended Kalman filter (EKF) algorithm. The EKF is applied after the LiDAR odometry estimation and before the mapping process of FAST-LIO2. The prediction step uses the wheel encoder and inertial measurement unit (IMU) data, while the correction step uses the FAST-LIO2 LiDAR state estimate. We used an AGV to conduct experiments in flat and inclined terrain sections of a tunnel environment. The results showed that both the mapping and the localization processes of the SLAM algorithm were greatly improved in a featureless tunnel environment, on both inclined and flat terrain. Full article
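The predict/correct structure described above can be illustrated with a one-dimensional Kalman filter (a deliberately simplified scalar sketch; the paper's EKF operates on the full vehicle state, and the names and noise values here are our assumptions):

```python
def kf_predict(x, P, u, Q):
    """Prediction: dead-reckoned motion increment u from wheel encoder / IMU."""
    return x + u, P + Q

def kf_correct(x, P, z, R):
    """Correction: pose estimate z from LiDAR odometry."""
    K = P / (P + R)                      # Kalman gain
    return x + K * (z - x), (1 - K) * P
```

Each cycle, the filter first propagates the state with the encoder/IMU input, then pulls it toward the LiDAR estimate in proportion to the relative uncertainties.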

18 pages, 6590 KiB  
Article
Inertial Measurement Units’ Reliability for Measuring Knee Joint Angle during Road Cycling
by Saša Obradović and Sara Stančin
Electronics 2023, 12(3), 751; https://doi.org/10.3390/electronics12030751 - 2 Feb 2023
Cited by 1 | Viewed by 2408
Abstract
We explore the reliability of joint angles in road cycling obtained using inertial measurement units. The considered method relies on 3D accelerometer and gyroscope measurements obtained from two such units, appropriately attached to two adjacent body parts, to measure the angle of the connecting joint. We investigate the effects of applying a simple drift compensation technique and an error-state Kalman filter. We consider the knee joint angle in particular, and conduct two measurement trials, one of 5 and one of 20 min, for seven subjects in a closed, supervised laboratory environment, using optical motion tracking system measurements as reference. As expected from an adaptive solution, the Kalman filter gives more stable results. The root mean square errors per pedalling cycle are below 3.2° for both trials and for all subjects, implying that inertial measurement units are reliable not only for short measurements, as is usually assumed, but for longer measurements as well. Considering the accuracy of the results, the presented method can reasonably be extended to open, unsupervised environments and other joint angles. Implementing the presented method supports the development of cheaper and more efficient monitoring equipment, as opposed to expensive motion tracking systems. Consequently, cyclists can have an affordable way of tracking their position, leading not only to better bicycle fitting but also to the avoidance and prevention of certain injuries. Full article
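A minimal sketch of the simple drift compensation idea (assuming, purely for illustration, that the integrated angle should return to its initial value at a pedalling cycle boundary; the names are ours, not the paper's). A constant gyro bias produces a linear ramp in the integrated angle, which this correction removes exactly:

```python
def integrate_gyro(rates, dt):
    """Integrate angular rate (deg/s) into an angle trace (deg)."""
    angles = [0.0]
    for w in rates:
        angles.append(angles[-1] + w * dt)
    return angles

def compensate_drift(angles):
    """Remove linear drift, assuming the joint angle repeats at the cycle boundary."""
    drift = angles[-1] - angles[0]
    n = len(angles) - 1
    return [a - drift * i / n for i, a in enumerate(angles)]
```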

15 pages, 4904 KiB  
Article
Two-Dimensional Positioning with Machine Learning in Virtual and Real Environments
by Dávid Kóczi, József Németh and József Sárosi
Electronics 2023, 12(3), 671; https://doi.org/10.3390/electronics12030671 - 29 Jan 2023
Viewed by 2223
Abstract
In this paper, a ball-on-plate control system driven only by a neural network agent is presented. Apart from reinforcement learning, no other control solution or support was applied. The implemented device, driven by two servo motors, learned by itself through thousands of iterations how to keep the ball in the center of the resistive sensor. We compared the real-world performance of agents trained in a real-world and in a virtual environment. We also examined the efficacy of a virtually pre-trained agent fine-tuned in the real environment. The obtained results were evaluated and compared to see which approach makes a good basis for a control task implemented purely with a neural network. Full article

21 pages, 3263 KiB  
Article
Detection and Classification of Printed Circuit Boards Using YOLO Algorithm
by Matko Glučina, Nikola Anđelić, Ivan Lorencin and Zlatan Car
Electronics 2023, 12(3), 667; https://doi.org/10.3390/electronics12030667 - 29 Jan 2023
Cited by 17 | Viewed by 6033
Abstract
Printed circuit boards (PCBs) are an indispensable part of every electronic device used today. They deliver considerable computing power in small dimensions, but producing and sorting PCBs can be a challenge in PCB factories. One of the main challenges in factories that use robotic manipulators for “pick and place” tasks is object orientation, because the robotic manipulator can misread the orientation of an object and thereby grasp it incorrectly; for this reason, object segmentation is the ideal solution for the given problem. In this research, the performance, memory size, and predictions of the YOLO version 5 (YOLOv5) segmentation algorithm are tested for the detection, classification, and segmentation of PCB microcontrollers. YOLOv5 was trained on 13 classes of PCB images from a publicly available dataset, which was modified and consists of 1300 images. The training was performed using different YOLOv5 neural network structures; the nano, small, medium, and large networks were used to select the optimal network for the given challenge. Additionally, the total dataset was cross-validated using 5-fold cross-validation and evaluated using the mean average precision, precision, recall, and F1-score classification metrics. The results showed that large, computationally demanding neural networks are not required for the given challenge, as demonstrated by the YOLOv5 small model, which obtained mAP, precision, recall, and F1-score values of 0.994, 0.996, 0.995, and 0.996, respectively. Based on the obtained evaluation metrics and prediction results, the model can be implemented in factories for PCB sorting applications. Full article
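The reported precision, recall, and F1-score follow the standard definitions from true/false positive and false negative counts (a generic sketch, not tied to the YOLOv5 evaluation code):

```python
def prf1(tp, fp, fn):
    """Precision, recall, and F1-score from detection counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```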

30 pages, 16474 KiB  
Article
Development of a New Robust Stable Walking Algorithm for a Humanoid Robot Using Deep Reinforcement Learning with Multi-Sensor Data Fusion
by Çağrı Kaymak, Ayşegül Uçar and Cüneyt Güzeliş
Electronics 2023, 12(3), 568; https://doi.org/10.3390/electronics12030568 - 22 Jan 2023
Cited by 10 | Viewed by 4518
Abstract
The difficult task of creating reliable mobility for humanoid robots has been studied for decades. Even though several different walking strategies have been put forth and walking performance has substantially increased, stability still falls short of expectations. Applications of Reinforcement Learning (RL) techniques are constrained by slow convergence and ineffective training. This paper develops a new robust and efficient framework, based on the Robotis-OP2 humanoid robot combined with a typical trajectory-generating controller and Deep Reinforcement Learning (DRL), to overcome these limitations. The framework consists of walking trajectory parameter optimization and a posture balancing system. The robot's multiple sensors are used for parameter optimization. Walking parameters are optimized using the Dueling Double Deep Q Network (D3QN), one of the DRL algorithms, in the Webots simulator. The hip strategy is adopted for the posture balancing system. Experimental studies are carried out in both simulated and real environments with the proposed framework and with Robotis-OP2’s walking algorithm. The experimental results show that the robot walks more stably with the proposed framework than with Robotis-OP2’s walking algorithm. The proposed framework should be beneficial for researchers working in the field of humanoid robot locomotion. Full article

18 pages, 1225 KiB  
Article
XDLL: Explained Deep Learning LiDAR-Based Localization and Mapping Method for Self-Driving Vehicles
by Anas Charroud, Karim El Moutaouakil, Vasile Palade and Ali Yahyaouy
Electronics 2023, 12(3), 567; https://doi.org/10.3390/electronics12030567 - 22 Jan 2023
Cited by 6 | Viewed by 2038
Abstract
Self-driving vehicles need a robust positioning system to continue the revolution in intelligent transportation. Global navigation satellite systems (GNSS) are most commonly used to accomplish this task because of their ability to accurately locate the vehicle in the environment. However, recent publications have revealed serious cases where GNSS fails miserably to determine the position of the vehicle, for example, under a bridge, in a tunnel, or in dense forests. In this work, we propose an explained deep learning LiDAR-based (XDLL) framework that predicts the position of the vehicle using only a few LiDAR points in the environment, which ensures the required speed and accuracy of interactions between vehicle components. The proposed framework extracts non-semantic features from LiDAR scans using a clustering algorithm. The identified clusters serve as input to our deep learning model, which relies on LSTM and GRU layers to store the trajectory points and on convolutional layers to smooth the data. The model has been extensively tested with short- and long-term trajectories from two benchmark datasets, KITTI and NCLT, containing different environmental scenarios. Moreover, we investigated the obtained results by explaining the contribution of each cluster feature using several explainability methods, including Saliency, SmoothGrad, and VarGrad. The analysis showed that taking the mean of all the clusters as the model input is enough to obtain better accuracy than the first model, and it reduces the time consumption as well. The improved model obtains a mean absolute positioning error below one meter for all sequences of the short- and long-term trajectories. Full article

13 pages, 1238 KiB  
Article
Human Activity Recognition for the Identification of Bullying and Cyberbullying Using Smartphone Sensors
by Vincenzo Gattulli, Donato Impedovo, Giuseppe Pirlo and Lucia Sarcinella
Electronics 2023, 12(2), 261; https://doi.org/10.3390/electronics12020261 - 4 Jan 2023
Cited by 10 | Viewed by 2111
Abstract
The smartphone is an excellent source of data; it is possible to extract smartphone sensor values and, through Machine Learning approaches, perform anomaly detection analysis characterized by human behavior. This work exploits Human Activity Recognition (HAR) models and techniques to identify the human activity performed while filling out a questionnaire via a smartphone application, which aims to classify users as Bullies, Cyberbullies, Victims of Bullying, or Victims of Cyberbullying. The purpose of the work is to discuss a new smartphone methodology that combines the final label elicited from the bullying/cyberbullying questionnaire (Bully, Cyberbully, Bullying Victim, or Cyberbullying Victim) and the human activity performed (Human Activity Recognition) while the individual fills out the questionnaire. The paper starts with a state-of-the-art analysis of HAR to arrive at the design of a model that can recognize everyday actions and discriminate them from actions resulting from alleged bullying activities. Five activities were considered for recognition: Walking, Jumping, Sitting, Running, and Falling. The best HAR activity identification model is then applied to the dataset derived from the “Smartphone Questionnaire Application” experiment to perform the analysis described above. Full article

14 pages, 10310 KiB  
Article
A Point Cloud Registration Method Based on Histogram and Vector Operations
by Yanan Zhang, Dayong Qiao, Changfeng Xia and Qing He
Electronics 2022, 11(24), 4172; https://doi.org/10.3390/electronics11244172 - 14 Dec 2022
Cited by 2 | Viewed by 1921
Abstract
Point-pair registration in a real scene remains a challenging task due to the complexity of solving three transformations (scale, rotation, and displacement) simultaneously and the influence of noise and outliers. To address this problem, a registration algorithm based on histogram and vector operations is proposed in this paper. The approach converts point-based operations into vector-based operations, thereby decomposing the registration process into three independent steps that solve for the scale transformation factor, the rotation matrix, and the displacement vector, which reduces the complexity of the solution and avoids the effects of scaling on the other two processes. The influence of outliers on the global transformation matrix is eliminated using a histogram-based approach. The algorithm's performance was evaluated through a comparison with the most commonly used SVD method in a series of validation experiments, with results showing that our method was superior to SVD in cases with scaling transformations or outliers. Full article
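The scale-factor step can be sketched with vector length ratios and a histogram vote (our illustrative reconstruction of the idea, not the authors' code; the bin width is an assumption). Correspondence pairs give length ratios, the histogram mode identifies the inlier ratios, and outlier correspondences land in other bins:

```python
from collections import Counter
import math

def estimate_scale(src, dst, bin_width=0.05):
    """Scale factor from pairwise vector length ratios with a histogram vote."""
    ratios = []
    for i in range(len(src)):
        for j in range(i + 1, len(src)):
            ds = math.dist(src[i], src[j])
            dd = math.dist(dst[i], dst[j])
            if ds > 1e-9:
                ratios.append(dd / ds)
    # Vote into histogram bins; average the members of the winning bin.
    bins = Counter(round(r / bin_width) for r in ratios)
    best = bins.most_common(1)[0][0]
    inliers = [r for r in ratios if round(r / bin_width) == best]
    return sum(inliers) / len(inliers)
```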

20 pages, 2778 KiB  
Article
A Novel Algorithm for Automated Human Single-Lead ECG Pre-Annotation and Beat-to-Beat Separation for Heartbeat Classification Using Autoencoders
by Abdallah Benhamida and Miklos Kozlovszky
Electronics 2022, 11(23), 4021; https://doi.org/10.3390/electronics11234021 - 4 Dec 2022
Viewed by 4174
Abstract
An electrocardiogram (ECG) is used to check the electrical activity of the heart over a limited short-term or long-term period. Short-term observations are often used in hospitals or clinics, whereas long-term observations (often called continuous or stream-like ECG observations) are used to monitor the heart’s electrical activity on a daily basis and during different daily activities, such as sleeping, running, and eating. An ECG can reflect the normal sinus rhythm as well as different heart problems, which might vary from Premature Atrial Contractions (PAC) and Premature Ventricular Contractions (PVC) to Sinus Arrest and many other problems. Performing such monitoring on a daily basis requires automated solutions that carry out most of the daily ECG analysis, alert the doctors in case of any problem, and even detect the type of problem so that the doctors receive an immediate report on the patient’s health status. This paper provides a workflow for detecting abnormal ECG signals from different sources of digitized ECG signals, including ambulatory devices. We propose an algorithm for ECG pre-annotation and beat-to-beat separation for heartbeat classification using autoencoders. The algorithm includes the training of different models for different types of abnormal ECG signals, and has shown promising results for normal sinus rhythm and PVC compared to other solutions. The solution is designed for both noise-free and noisy signals. Full article
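The per-class decision rule behind such a scheme can be sketched via reconstruction error (an illustrative skeleton under our own assumptions: the per-class "autoencoders" are stand-in callables here, and the thresholds are placeholders, not the paper's values):

```python
def mse(beat, reconstruction):
    """Mean squared reconstruction error for one separated beat."""
    return sum((a - b) ** 2 for a, b in zip(beat, reconstruction)) / len(beat)

def classify_beat(beat, models, thresholds):
    """Assign the class whose autoencoder reconstructs the beat within its
    threshold; 'unknown' if no model reconstructs it well enough."""
    best, best_err = "unknown", float("inf")
    for name, model in models.items():
        err = mse(beat, model(beat))
        if err < thresholds[name] and err < best_err:
            best, best_err = name, err
    return best
```

In practice, each `model` would be an autoencoder trained on beats of one rhythm type, so a beat is claimed by whichever class reconstructs it best.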

14 pages, 2934 KiB  
Article
Thermal Biometric Features for Drunk Person Identification Using Multi-Frame Imagery
by Georgia Koukiou
Electronics 2022, 11(23), 3924; https://doi.org/10.3390/electronics11233924 - 27 Nov 2022
Viewed by 2198
Abstract
In this work, multi-frame thermal imagery of a person’s face is employed for drunk identification. Regions of almost constant temperature on the faces of sober and drunk persons are thoroughly examined for their capability to discriminate intoxication. Novel image processing approaches as well as feature extraction techniques are developed to support the drunk identification procedure. These techniques constitute novel contributions to image analysis and algorithm development. Nonlinear anisotropic diffusion is employed to lightly smooth the images before feature extraction. Feature vector extraction is based on morphological operations performed on the isothermal regions of the face. The classifier chosen to verify the drunk person discrimination capabilities of the procedure is a Support Vector Machine (SVM). The isothermal regions of the face change their shape and size with alcohol consumption. Consequently, intoxication identification can be carried out based only on the thermal signatures of the drunk person, without needing the signature of the corresponding sober person. A sample of 41 participants who drank in a controlled alcohol consumption procedure was used to create the database, which contains 4100 thermal images. The proposed method for intoxication identification achieves a success rate of over 86% and constitutes a fast, non-invasive test that could replace existing breathalyzer check-ups. Full article

25 pages, 9487 KiB  
Article
Switching Trackers for Effective Sensor Fusion in Advanced Driver Assistance Systems
by Ankur Deo and Vasile Palade
Electronics 2022, 11(21), 3586; https://doi.org/10.3390/electronics11213586 - 3 Nov 2022
Cited by 1 | Viewed by 1754
Abstract
Modern cars utilise Advanced Driver Assistance Systems (ADAS) in several ways. In ADAS, the use of multiple sensors to gauge the environment surrounding the ego-vehicle offers numerous advantages, as fusing information from more than one sensor helps to provide highly reliable and error-free data. The fused data are typically then fed to a tracker algorithm, which helps to reduce noise, compensate for situations when received sensor data are temporarily absent or spurious, and counter occasional false positives and negatives. The performance of these constituent algorithms varies greatly under different scenarios. In this paper, we focus on the variation in the performance of tracker algorithms in sensor fusion due to altered external conditions in different scenarios, and on methods for countering that variation. We introduce a sensor fusion architecture in which the tracking algorithm is switched on the fly to achieve the best performance under all scenarios. By employing a Real-time Traffic Density Estimation (RTDE) technique, we can determine whether the ego-vehicle is currently in dense or sparse traffic. Highly dense (congested) traffic implies that external circumstances are non-linear; similarly, sparse traffic implies a higher probability of linear external conditions. We also employ a Traffic Sign Recognition (TSR) algorithm, which monitors for construction zones, junctions, schools, and pedestrian crossings, thereby identifying areas with a high probability of spontaneous on-road occurrences. Based on the results received from the RTDE and TSR algorithms, we construct logic that switches the tracker of the fusion architecture between an Extended Kalman Filter (for linear external scenarios) and an Unscented Kalman Filter (for non-linear scenarios). This ensures that the fusion model always uses the tracker best suited to its current needs, yielding consistent accuracy across multiple external scenarios compared to fusion models that employ a fixed single tracker. Full article
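The switching decision reduces to a simple rule (a sketch; the threshold value and interface are our assumptions, with the inputs standing in for the RTDE and TSR outputs):

```python
def select_tracker(traffic_density, near_hotspot, density_threshold=0.6):
    """Choose the UKF when conditions are likely non-linear: dense traffic,
    or a TSR-flagged zone (construction, junction, school, crossing)."""
    if traffic_density > density_threshold or near_hotspot:
        return "UKF"
    return "EKF"
```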

11 pages, 1636 KiB  
Article
3D Skeletal Volume Templates for Deep Learning-Based Activity Recognition
by Ali Seydi Keçeli, Aydın Kaya and Ahmet Burak Can
Electronics 2022, 11(21), 3567; https://doi.org/10.3390/electronics11213567 - 1 Nov 2022
Cited by 1 | Viewed by 1573
Abstract
Due to advances in depth sensor technologies, the use of these sensors has positively impacted studies of human-computer interaction and activity recognition. This study proposes a novel 3D action template generated from depth sequence data, along with two methods that use this template to classify single-person activities. Initially, joint skeleton-based three-dimensional volumetric templates are constructed from depth information. In the first method, images are obtained from various view angles of these three-dimensional templates and used for deep feature extraction with a pre-trained convolutional neural network. In our experiments, a pre-trained AlexNet model trained on the ImageNet dataset is used as the feature extractor. Activities are classified by combining deep features with Histogram of Oriented Gradients (HOG) features. The second approach proposes a three-dimensional convolutional neural network that takes the volumetric templates as input for activity classification. The proposed methods were tested on two publicly available datasets, and the experiments provided promising results compared with other studies in the literature. Full article

16 pages, 4614 KiB  
Article
High Sound-Contrast Inverse Scattering by MR-MF-DBIM Scheme
by Luong Thi Theu, Tran Quang-Huy, Tran Duc-Nghia, Vijender Kumar Solanki, Tran Duc-Tan and João Manuel R. S. Tavares
Electronics 2022, 11(19), 3203; https://doi.org/10.3390/electronics11193203 - 6 Oct 2022
Cited by 1 | Viewed by 1651
Abstract
In ultrasound tomography, cross-sectional images represent the spatial distribution of the physical parameters of a target of interest, which can be obtained from scattered ultrasound measurements. These measurements can be obtained from dense datasets collected at different transmitter and receiver locations, using multiple frequencies. The Born approximation method, which provides a simple linear relationship between the objective function and the scattered field, has been adopted to resolve the inverse scattering problem. The distorted Born iterative method (DBIM), which utilizes the first-order Born approximation, is an effective diffraction tomography scheme. In this article, interpolation is extended to the multilayer level, taking advantage of integrating this multilayer interpolation with multiple frequencies in the DBIM. Specifically, we consider: (a) a multi-resolution technique, i.e., a multi-step interpolation for the DBIM (MR-DBIM), with the advantage that the normalized absolute error is reduced by 18.67% and 37.21% in comparison with one-step interpolation DBIM and the typical DBIM, respectively; (b) the integration of multi-resolution and multi-frequency techniques with the DBIM (MR-MF-DBIM), which is applied to image targets of high sound contrast in a strongly scattering medium. Relative to MR-DBIM, this integration offers a 44.01% reduction in the normalized absolute error. Full article
26 pages, 6631 KiB  
Article
Intelligent Sensors and Environment Driven Biological Comfort Control Based Smart Energy Consumption System
by Muhammad Asim Nawaz, Bilal Khan, Sahibzada Muhammad Ali, Muhammad Awais, Muhammad Bilal Qureshi, Muhammad Jawad, Chaudhry Arshad Mehmood, Zahid Ullah and Sheraz Aslam
Electronics 2022, 11(16), 2622; https://doi.org/10.3390/electronics11162622 - 21 Aug 2022
Cited by 2 | Viewed by 2444
Abstract
The smart energy consumption of any household, while maintaining the thermal comfort level of the occupant, is of great interest. Sensors and Internet-of-Things (IoT)-based intelligent hardware setups control home appliances intelligently and ensure smart energy consumption, considering environment parameters. However, the effects of environment-driven consumer body dynamics on energy consumption, considering the consumer's comfort level, need to be addressed. Therefore, an Energy Management System (EMS) is modeled, designed, and analyzed with hybrid inputs, namely environmental perturbations and consumer body biological shifts, such as blood flows in the skin, fat, muscle, and core layers (affecting consumer comfort through blood-driven sensations). In this regard, our work incorporates a 69 Multi-Node (MN) Stolwijk consumer body model interfaced with an indoor (room) electrical system capable of mutual interaction exchange between room environmental parameters and consumer body dynamics. The mutual energy transactions are controlled with classical PID and Adaptive Neuro-Fuzzy Type-II (NF-II) systems inside the room dimensions. Further, the relations between consumer comfort, the room environment, and energy consumption under bidirectional control are demonstrated, analyzed, and tested in MATLAB/Simulink to reduce energy consumption and energy cost. Finally, six different cases are considered in simulation settings, and for performance validation, one case is validated through real-time hardware experimentation. Full article
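The classical PID side of such an EMS can be sketched as a discrete control loop driving a room temperature toward a comfort setpoint. This is a minimal illustration only: the gains, the first-order room model, and the setpoint below are assumptions, not the paper's 69-node EMS or its Neuro-Fuzzy controller.

```python
# Discrete PID sketch: drive room temperature toward a comfort setpoint.
# Gains and the first-order room model are illustrative assumptions.

def pid_step(error, state, kp=2.0, ki=0.5, kd=0.1, dt=0.1):
    """One PID update; `state` carries the integral and previous error."""
    integral = state["integral"] + error * dt
    deriv = (error - state["prev_error"]) / dt
    state["integral"], state["prev_error"] = integral, error
    return kp * error + ki * integral + kd * deriv

setpoint, temp = 24.0, 18.0                    # degrees Celsius
state = {"integral": 0.0, "prev_error": 0.0}
for _ in range(500):
    u = pid_step(setpoint - temp, state)       # heating command
    # Assumed first-order room model: heating input vs. ambient losses
    temp += 0.1 * (0.5 * u - 0.2 * (temp - 15.0))
```

In the paper's setting, the error signal would additionally reflect the occupant's blood-flow-driven comfort state rather than air temperature alone, and the Type-II neuro-fuzzy controller would replace the fixed PID gains.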

Review

Jump to: Research

22 pages, 993 KiB  
Review
Review of Obstacle Detection Systems for Collision Avoidance of Autonomous Underwater Vehicles Tested in a Real Environment
by Rafał Kot
Electronics 2022, 11(21), 3615; https://doi.org/10.3390/electronics11213615 - 5 Nov 2022
Cited by 12 | Viewed by 4998
Abstract
High efficiency of the obstacle detection system (ODS) is essential for the high performance of autonomous underwater vehicles (AUVs) carrying out missions in complex underwater environments. Based on an analysis of the previous literature, which includes path planning and collision avoidance algorithms, solutions whose operation was confirmed by tests in a real-world environment were selected for consideration in this paper. These studies were subjected to a deeper analysis assessing the effectiveness of the obstacle detection algorithms. The analysis shows that, over the years, ODSs have been improved and provide greater detection accuracy, which results in better AUV response times. Almost all of the analysed methods are based on a conventional approach to obstacle detection. In the future, even better ODS parameters could be achieved by using artificial intelligence (AI) methods. Full article
