Topic Editors

Prof. Dr. Yangquan Chen, Department of Mechanical Engineering (ME), University of California, Merced, CA 95343, USA
Prof. Dr. Subhas Mukhopadhyay, School of Engineering, Macquarie University, Sydney, NSW 2109, Australia
Prof. Dr. Nunzio Cennamo, Department of Engineering, University of Campania Luigi Vanvitelli, Via Roma 29, 81031 Aversa, Italy
Prof. Dr. M. Jamal Deen, Department of Electrical & Computer Engineering, Faculty of Engineering, McMaster University, Hamilton, ON L8S 4L8, Canada
Dr. Junseop Lee, Department of Materials Science and Engineering, Gachon University, 1342 Seongnam-Daero, Sujeong-Gu, Seongnam-si 13120, Republic of Korea
Prof. Dr. Simone Morais, REQUIMTE/LAQV, ISEP, Polytechnic of Porto, Rua Dr. António Bernardino de Almeida, 4249-015 Porto, Portugal

Artificial Intelligence in Sensors

Abstract submission deadline
closed (30 October 2022)
Manuscript submission deadline
closed (31 December 2022)
Viewed by
527110

Topic Information

Dear Colleagues,

This topic comprises several interdisciplinary research areas that cover the main aspects of sensor sciences.

There has been an increase in both the capabilities required and the challenges encountered within numerous application fields, e.g., Robotics, Industry 4.0, Automotive, Smart Cities, Medicine, Diagnosis, Food, Telecommunication, Environmental and Civil Applications, Health, and Security.

These applications constantly require novel sensors to improve their capabilities and to meet new challenges. Thus, Sensor Sciences represents a paradigm characterized by the integration of modern nanotechnologies and nanomaterials into manufacturing and industrial practice to develop tools for several application fields. The primary underlying goal of Sensor Sciences is to facilitate the closer interconnection and control of complex systems, machines, devices, and people to increase the support provided to humans in several application fields.

Sensor Sciences comprises a set of significant research fields, including:

  • Chemical Sensors;
  • Biosensors;
  • Physical Sensors;
  • Optical Sensors;
  • Microfluidics;
  • Sensor Networks;
  • Electronics and Mechanics;
  • Mechatronics;
  • Internet of Things platforms and their applications;
  • Materials and Nanomaterials;
  • Data Security;
  • Artificial Intelligence;
  • Robotics;
  • UAVs and UGVs;
  • Remote Sensing;
  • Measurement Science and Technology;
  • Cognitive Computing Platforms and Applications, including technologies related to Artificial Intelligence, Machine Learning, as well as Big Data Processing and Analytics;
  • Advanced Interactive Technologies, including Augmented/Virtual Reality;
  • Advanced Data Visualization Techniques;
  • Instrumentation Science and Technology;
  • Nanotechnology;
  • Organic Electronics, Biophotonics, and Smart Materials;
  • Optoelectronics, Photonics, and Optical fibers;
  • MEMS, Microwaves, and Acoustic waves;
  • Physics and Biophysics;
  • Interdisciplinary Sciences.

This topic aims to collect the results of research in these and related fields. Submissions in any area connected to sensors are therefore strongly encouraged.

Prof. Dr. Yangquan Chen
Prof. Dr. Subhas Mukhopadhyay
Prof. Dr. Nunzio Cennamo
Prof. Dr. M. Jamal Deen
Dr. Junseop Lee
Prof. Dr. Simone Morais
Topic Editors

Keywords

  • artificial intelligence
  • sensors
  • machine learning
  • deep learning
  • computer vision
  • image processing
  • smart sensing
  • smart sensor
  • intelligent sensor
  • unmanned aerial vehicle
  • UAV
  • unmanned ground vehicle
  • UGV
  • robotics
  • machine vision

Participating Journals

Journal             Impact Factor   CiteScore   Launched   First Decision (median)   APC
Sensors             3.4             7.3         2001       16.8 days                 CHF 2600
Remote Sensing      4.2             8.3         2009       24.7 days                 CHF 2700
Applied Sciences    2.5             5.3         2011       17.8 days                 CHF 2400
Electronics         2.6             5.3         2012       16.8 days                 CHF 2400
Drones              4.4             5.6         2017       21.7 days                 CHF 2600

Preprints.org is a multidisciplinary platform providing a preprint service dedicated to sharing your research from the start and empowering your research journey.

MDPI Topics is cooperating with Preprints.org and has built a direct connection between MDPI journals and Preprints.org. Authors are encouraged to take advantage of the following benefits by posting a preprint at Preprints.org prior to publication:

  1. Immediately share your ideas ahead of publication and establish your research priority;
  2. Protect your idea with a time-stamped preprint record;
  3. Enhance the exposure and impact of your research;
  4. Receive feedback from your peers in advance;
  5. Have it indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (138 papers)

18 pages, 9886 KiB  
Article
Study of Channel-Type Dynamic Weighing System for Goat Herds
by Zhiwen He, Kun Wang, Jingjing Chen, Jile Xin, Hongwei Du, Ding Han and Ying Guo
Electronics 2023, 12(7), 1715; https://doi.org/10.3390/electronics12071715 - 4 Apr 2023
Cited by 1 | Viewed by 1688
Abstract
This paper proposes a design method for a channel-type sheep dynamic weighing system to address problems currently encountered by pastoralists worldwide, such as time-consuming sheep weighing, difficulties with data collection, and management of the stress response in sheep. The complete system includes a hardware structure, dynamic characteristics, and a Kalman–ensemble empirical mode decomposition (Kalman-EEMD) algorithm model for dynamic data processing. The noise suppression effects of the Kalman filter, empirical mode decomposition (EMD), and ensemble empirical mode decomposition (EEMD) algorithms are discussed for practical applications. Field tests showed that the Kalman-EEMD algorithm model has the advantages of high accuracy, efficiency, and reliability. The maximum error between the actual weight of the goats and the measured value in the experiments was 1.0%, with an average error as low as 0.40% and a maximum pass time of 2 s for a single goat. This meets the requirements for weighing accuracy and goat flock weighing rates. Full article
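
As an illustration of the Kalman stage of such a pipeline, the sketch below smooths a noisy weigh-platform signal with a scalar Kalman filter. The sampling rate, noise levels, and signal model are illustrative assumptions, not the authors' implementation; EEMD would additionally decompose the gait-induced oscillation.

```python
import numpy as np

def kalman_smooth(z, q=1e-4, r=0.25):
    """Scalar Kalman filter: constant-weight model with process noise q
    and measurement noise r. Returns the filtered estimate per sample."""
    x, p = z[0], 1.0           # initial state and covariance
    out = np.empty_like(z)
    for k, zk in enumerate(z):
        p = p + q              # predict: weight assumed ~constant
        kg = p / (p + r)       # Kalman gain
        x = x + kg * (zk - x)  # update with the new load-cell reading
        p = (1.0 - kg) * p
        out[k] = x
    return out

# Illustrative data: a 25 kg goat crossing the platform at 100 Hz sampling,
# with gait-induced oscillation and sensor noise.
t = np.linspace(0, 2, 200)
z = 25.0 + 1.5 * np.sin(2 * np.pi * 4 * t) + np.random.normal(0, 0.8, t.size)
est = kalman_smooth(z)
print(f"estimated weight: {est[-50:].mean():.2f} kg")  # mean over the steady tail
```
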
(This article belongs to the Topic Artificial Intelligence in Sensors)

28 pages, 5817 KiB  
Article
3D LiDAR Point Cloud Registration Based on IMU Preintegration in Coal Mine Roadways
by Lin Yang, Hongwei Ma, Zhen Nie, Heng Zhang, Zhongyang Wang and Chuanwei Wang
Sensors 2023, 23(7), 3473; https://doi.org/10.3390/s23073473 - 26 Mar 2023
Cited by 5 | Viewed by 3321
Abstract
Point cloud registration is the basis of real-time environment perception for robots using 3D LiDAR and is also the key to robust simultaneous localization and mapping (SLAM). Because LiDAR point clouds are characterized by local sparseness and motion distortion, the point cloud features of coal mine roadway environments show weak texture and degradation. In such environments, directly applying traditional point cloud registration methods leads to problems such as reduced registration accuracy, z-axis drift, and map ghosting. To solve these problems, we propose a point cloud registration method based on IMU preintegration that accounts for the sensor characteristics of LiDAR and IMU. The system framework of this method consists of four modules: IMU preintegration, point cloud preprocessing, point cloud frame matching, and point cloud registration. First, IMU sensor data are introduced, IMU linear interpolation is used to correct the motion distortion in LiDAR scanning, and the IMU preintegration error function is constructed. Second, point cloud segmentation is performed using RANSAC-based ground segmentation to provide additional ground constraints for the z-axis displacement and to remove unstable flawed points from the point cloud. On this basis, the LiDAR point cloud registration error function is constructed by extracting feature corner points and feature plane points. Finally, the Gauss–Newton method is used to optimize the constraint relationship between the LiDAR odometry frames, minimize the error function, complete the LiDAR point cloud registration, and better estimate the position and pose of the mobile robot. The experimental results show that, compared with traditional point cloud registration methods, the proposed method achieves higher point cloud registration accuracy, success rate, and computational efficiency. The LiDAR odometry constructed using this method better reflects the authenticity of the robot trajectory and has higher trajectory accuracy and smaller absolute position and pose errors. Full article
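
The preintegration idea is compact enough to sketch: between two LiDAR keyframes, raw IMU samples are folded into relative rotation, velocity, and position deltas that are independent of the global state. The minimal numpy sketch below assumes known biases and leaves gravity compensation to the state level, as full pipelines do; it is not the paper's implementation.

```python
import numpy as np

def skew(w):
    """3x3 skew-symmetric matrix of a 3-vector."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def so3_exp(phi):
    """Rodrigues' formula: rotation vector -> rotation matrix."""
    theta = np.linalg.norm(phi)
    if theta < 1e-8:
        return np.eye(3) + skew(phi)
    a = phi / theta
    return (np.cos(theta) * np.eye(3)
            + (1.0 - np.cos(theta)) * np.outer(a, a)
            + np.sin(theta) * skew(a))

def preintegrate(gyro, accel, dt, bg=np.zeros(3), ba=np.zeros(3)):
    """Fold raw IMU samples between two keyframes into relative rotation,
    velocity, and position deltas, independent of the global state."""
    dR, dv, dp = np.eye(3), np.zeros(3), np.zeros(3)
    for w, a in zip(gyro, accel):
        a_b = a - ba                              # bias-corrected acceleration
        dp = dp + dv * dt + 0.5 * (dR @ a_b) * dt ** 2
        dv = dv + (dR @ a_b) * dt
        dR = dR @ so3_exp((w - bg) * dt)          # integrate gyro last
    return dR, dv, dp

# Illustrative: 0.1 s of 200 Hz IMU data between two LiDAR scans.
gyro = np.random.normal(0.0, 0.01, (20, 3))       # rad/s
accel = np.tile([0.0, 0.0, 9.81], (20, 1))        # m/s^2 (gravity not yet removed)
dR, dv, dp = preintegrate(gyro, accel, dt=1 / 200)
print(dp)  # position delta expressed in the first keyframe's body frame
```

In a formulation like the paper's, these deltas (with bias terms) form the IMU preintegration error alongside the corner- and plane-point residuals in the joint optimization.
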
(This article belongs to the Topic Artificial Intelligence in Sensors)

17 pages, 11226 KiB  
Article
Temperature Drift Compensation of a MEMS Accelerometer Based on DLSTM and ISSA
by Gangqiang Guo, Bo Chai, Ruichu Cheng and Yunshuang Wang
Sensors 2023, 23(4), 1809; https://doi.org/10.3390/s23041809 - 6 Feb 2023
Cited by 13 | Viewed by 3380
Abstract
In order to improve the performance of a micro-electro-mechanical system (MEMS) accelerometer, three algorithms for compensating its temperature drift are proposed in this paper: a deep long short-term memory recurrent neural network (DLSTM-RNN, DLSTM for short), DLSTM based on the sparrow search algorithm (SSA), and DLSTM based on an improved SSA (ISSA). Moreover, the piecewise linear approximation (PLA) method is employed as a comparison to evaluate the impact of the proposed algorithms. First, a temperature experiment is performed to obtain the MEMS accelerometer's temperature drift output (TDO). Then, we propose a real-time compensation model and a linear approximation model for compensation by the neural network methods and the PLA method, respectively. The real-time compensation model is a recursive method based on the TDO at the previous moment. The linear approximation model takes the MEMS accelerometer's temperature and TDO as input and output, respectively. Next, the TDO is analyzed and optimized by the real-time compensation model and the three algorithms mentioned before, and is also compensated by the linear approximation model and the PLA method as a comparison. The compensation results show that the three neural network methods and the PLA method effectively compensate for the temperature drift of the MEMS accelerometer, with the DLSTM + ISSA method achieving the best compensation effect. After compensation by DLSTM + ISSA, the three Allan variance coefficients of the MEMS accelerometer, namely bias instability, rate random walk, and rate ramp, are improved from 5.43 × 10^−4 mg, 4.33 × 10^−5 mg/s^(1/2), and 1.18 × 10^−6 mg/s to 2.77 × 10^−5 mg, 1.14 × 10^−6 mg/s^(1/2), and 2.63 × 10^−8 mg/s, respectively, an improvement of 96.68% on average. Full article
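
A minimal sketch of the DLSTM regressor (stacked LSTM layers feeding a dense output) is given below in Keras. The window length, layer sizes, and synthetic training pairs are assumptions, and the SSA/ISSA hyperparameter search is omitted.

```python
import numpy as np
import tensorflow as tf

# Illustrative training pairs: sliding windows of [temperature, previous TDO]
# (per the recursive real-time model) mapped to the next drift value.
WIN = 32
X = np.random.rand(1000, WIN, 2).astype("float32")
y = np.random.rand(1000, 1).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, return_sequences=True, input_shape=(WIN, 2)),
    tf.keras.layers.LSTM(32),     # second stacked layer makes the LSTM "deep"
    tf.keras.layers.Dense(1),     # regressed temperature-drift output
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=64, verbose=0)

drift = model.predict(X[:1])      # compensation: raw output minus predicted drift
```
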
(This article belongs to the Topic Artificial Intelligence in Sensors)
(This article belongs to the Section Navigation and Positioning)

24 pages, 7656 KiB  
Article
Adaptive Multi-Scale Fusion Blind Deblurred Generative Adversarial Network Method for Sharpening Image Data
by Baoyu Zhu, Qunbo Lv and Zheng Tan
Drones 2023, 7(2), 96; https://doi.org/10.3390/drones7020096 - 30 Jan 2023
Cited by 4 | Viewed by 3010
Abstract
Drone and aerial remote sensing images are widely used, but their imaging environment is complex and prone to image blurring. Existing CNN deblurring algorithms usually use multi-scale fusion to extract features in order to make full use of aerial remote sensing blurred image information, but images with different degrees of blurring use the same weights, leading to increasing errors in the feature fusion process layer by layer. Based on the physical properties of image blurring, this paper proposes an adaptive multi-scale fusion blind deblurred generative adversarial network (AMD-GAN), which innovatively applies the degree of image blurring to guide the adjustment of the weights of multi-scale fusion, effectively suppressing the errors in the multi-scale fusion process and enhancing the interpretability of the feature layer. The research work in this paper reveals the necessity and effectiveness of a priori information on image blurring levels in image deblurring tasks. By studying and exploring the image blurring levels, the network model focuses more on the basic physical features of image blurring. Meanwhile, this paper proposes an image blurring degree description model, which can effectively represent the blurring degree of aerial remote sensing images. The comparison experiments show that the algorithm in this paper can effectively recover images with different degrees of blur, obtain high-quality images with clear texture details, outperform the comparison algorithm in both qualitative and quantitative evaluation, and can effectively improve the object detection performance of blurred aerial remote sensing images. Moreover, the average PSNR of this paper’s algorithm tested on the publicly available dataset RealBlur-R reached 41.02 dB, surpassing the latest SOTA algorithm. Full article
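
The paper's blur-degree description model is not reproduced here, but a common stand-in conveys the idea: score sharpness with the variance of the Laplacian and squash it into a fusion weight. Everything below, including the squashing constants, is an illustrative assumption.

```python
import cv2
import numpy as np

def blur_degree(img_bgr):
    """Variance of the Laplacian: low values indicate stronger blur.
    A normalized score like this could gate multi-scale fusion weights."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

img = np.random.randint(0, 255, (256, 256, 3), dtype=np.uint8)  # stand-in frame
score = blur_degree(img)
weight = 1.0 / (1.0 + np.exp(-(score - 100) / 25))  # illustrative map to (0, 1)
print(f"sharpness score {score:.1f} -> fusion weight {weight:.2f}")
```
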
(This article belongs to the Topic Artificial Intelligence in Sensors)

14 pages, 5734 KiB  
Article
Real-Time Finger-Writing Character Recognition via ToF Sensors on Edge Deep Learning
by Jiajin Zhang, Guoying Peng, Hongyu Yang, Chao Tan, Yaqing Tan and Hui Bai
Electronics 2023, 12(3), 685; https://doi.org/10.3390/electronics12030685 - 30 Jan 2023
Cited by 5 | Viewed by 3076
Abstract
Human–computer interaction demands natural and convenient approaches, among which finger-writing recognition has attracted more and more attention. In this paper, a device-free finger-writing character recognition system based on an array of time-of-flight (ToF) distance sensors is presented. The ToF sensors acquire distance values from the sensors to a writing finger within a 9.5 × 15 cm area on a surface at specific time intervals and send the distance data to a low-power STM32F401 microcontroller equipped with deep learning algorithms for real-time inference and recognition. The proposed method can distinguish the 26 English lower-case letters written by users with their fingers and does not require wearing additional devices. All data used in this work were collected from 21 subjects (12 males and 9 females) to evaluate the proposed system in a real scenario. The performance of different deep learning algorithms, such as long short-term memory (LSTM), convolutional neural networks (CNNs), and bidirectional LSTM (BiLSTM), was evaluated. These algorithms achieve high accuracy, with the best result obtained by the LSTM: 98.31% accuracy and 50 ms maximum latency. Full article
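
A minimal sketch of such a sequence classifier is shown below in PyTorch; the sensor count, window length, and layer sizes are illustrative assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class FingerWriteLSTM(nn.Module):
    """Sequence classifier: ToF distance frames -> one of 26 letters.
    Sensor count and layer sizes are illustrative assumptions."""
    def __init__(self, n_sensors=8, hidden=64, n_classes=26):
        super().__init__()
        self.lstm = nn.LSTM(n_sensors, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):            # x: (batch, time, n_sensors)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])      # logits from the last hidden state

model = FingerWriteLSTM()
x = torch.randn(4, 120, 8)           # 4 traces, 120 time steps, 8 ToF readings
pred = model(x).argmax(dim=1)        # letter indices 0..25 ('a'..'z')
print([chr(ord('a') + i) for i in pred.tolist()])
```
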
(This article belongs to the Topic Artificial Intelligence in Sensors)

18 pages, 4295 KiB  
Article
Deep Learning-Based Cost-Effective and Responsive Robot for Autism Treatment
by Aditya Singh, Kislay Raj, Teerath Kumar, Swapnil Verma and Arunabha M. Roy
Drones 2023, 7(2), 81; https://doi.org/10.3390/drones7020081 - 23 Jan 2023
Cited by 78 | Viewed by 6678
Abstract
Recent studies state that, for a person with autism spectrum disorder, learning and improvement are often seen in environments where technological tools are involved. A robot is an excellent tool to be used in therapy and teaching. It can transform teaching methods, not just in classrooms but also in in-house clinical practice. With the rapid advancement in deep learning techniques, robots have become more capable of handling human behaviour. In this paper, we present a cost-efficient, socially designed robot called 'Tinku', developed to assist in teaching children with special needs. 'Tinku' is low cost but full of features and has the ability to produce human-like expressions. Its design is inspired by the widely accepted animated character 'WALL-E'. Its capabilities include offline speech processing and computer vision, using light object detection models such as Yolo v3-tiny and the single shot detector (SSD), for obstacle avoidance, non-verbal communication, expressing emotions in an anthropomorphic way, etc. It uses an onboard deep learning technique to localize the objects in the scene and uses this information for semantic perception. We have developed several lessons for training using these features; a sample lesson about brushing is discussed to show the robot's capabilities. The robot was developed under the supervision of clinical experts, and the conditions for its application were taken into account. A small survey on its appearance is also discussed. More importantly, it was tested on small children for acceptance of the technology and compatibility in terms of voice interaction. It helps autistic children using state-of-the-art deep learning models. Autism spectrum disorders are increasingly identified in today's world, and studies show that children interact with technology more comfortably than with a human instructor. To meet this demand, we present a cost-effective solution in the form of a robot with some common lessons for the training of a child affected by autism. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)

11 pages, 2789 KiB  
Article
Electrolyte-Gated Graphene Field Effect Transistor-Based Ca2+ Detection Aided by Machine Learning
by Rong Zhang, Tiantian Hao, Shihui Hu, Kaiyang Wang, Shuhui Ren, Ziwei Tian and Yunfang Jia
Sensors 2023, 23(1), 353; https://doi.org/10.3390/s23010353 - 29 Dec 2022
Cited by 3 | Viewed by 2546
Abstract
Flexible electrolyte-gated graphene field effect transistors (Eg-GFETs) are widely developed as sensors because of their fast response, versatility, and low cost. However, their sensitivities and response ranges are often altered by different gate voltages. These bias-voltage-induced uncertainties are an obstacle in the development of Eg-GFETs. To shield from this risk, a machine-learning-algorithm-based method for analyzing Eg-GFET data is studied in this work, using Ca2+ detection as a proof of concept. For the as-prepared Eg-GFET-Ca2+ sensors, their transfer and output characteristics are first measured. Then, eight regression models are trained using different machine learning algorithms, including linear regression, support vector machine, decision tree, and random forest, among others. The optimized model is obtained with the random-forest-treated transfer curves. Finally, the proposed method is applied to determine Ca2+ concentration in a calibration-free way, and it is found that the relation between the estimated and real Ca2+ concentrations is close to y = x. Accordingly, we think the proposed method may not only provide accurate results but also simplify the traditional calibration step in using Eg-GFET sensors. Full article
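
The regression step can be sketched with scikit-learn; the stand-in data below (transfer curves as feature vectors, log-concentration as target) are purely illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Illustrative stand-in data: each row is a transfer curve (drain current
# sampled over a gate-voltage sweep); the target is log10 of [Ca2+].
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 64))        # 300 curves, 64 gate-voltage samples
y = rng.uniform(-6, -3, size=300)     # log10 molar concentration

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)

print("R^2 on held-out curves:", model.score(X_te, y_te))
# In the paper's setting, a good model's predictions cluster around y = x
# when plotted against the true concentrations.
```
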
(This article belongs to the Topic Artificial Intelligence in Sensors)

14 pages, 6335 KiB  
Article
Training Artificial Intelligence Algorithms with Automatically Labelled UAV Data from Physics-Based Simulation Software
by Jonathan Boone, Christopher Goodin, Lalitha Dabbiru, Christopher Hudson, Lucas Cagle and Daniel Carruth
Appl. Sci. 2023, 13(1), 131; https://doi.org/10.3390/app13010131 - 22 Dec 2022
Cited by 12 | Viewed by 2536
Abstract
Machine learning (ML) requires human-labeled "truth" data for training and testing. Acquiring and labeling these data can often be the most time-consuming and expensive part of developing trained convolutional neural network (CNN) models. In this work, we show that an automated workflow using automatically labeled synthetic data can drastically reduce the time and effort required to train a machine learning algorithm for detecting buildings in aerial imagery acquired with low-flying unmanned aerial vehicles. The MSU Autonomous Vehicle Simulator (MAVS) was used, and the process for integrating MAVS into an automated workflow is presented, along with results for building detection with real and simulated images. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)

12 pages, 2817 KiB  
Article
Non-Specific Responsive Nanogels and Plasmonics to Design MathMaterial Sensing Interfaces: The Case of a Solvent Sensor
by Nunzio Cennamo, Francesco Arcadio, Fiore Capasso, Devid Maniglio, Luigi Zeni and Alessandra Maria Bossi
Sensors 2022, 22(24), 10006; https://doi.org/10.3390/s222410006 - 19 Dec 2022
Cited by 3 | Viewed by 2227
Abstract
The combination of non-specific deformable nanogels and plasmonic optical probes provides an innovative solution for specific sensing using a generalistic recognition layer. Soft polyacrylamide nanogels that lack specific selectivity but are characterized by responsive behavior, i.e., shrinking and swelling dependent on the surrounding environment, were grafted to a gold plasmonic D-shaped plastic optical fiber (POF) probe. The nanogel–POF cyclically challenged with water or alcoholic solutions optically reported the reversible solvent-to-phase transitions of the nanomaterial, embodying a primary optical switch. Additionally, the non-specific nanogel–POF interface exhibited more degrees of freedom through which specific sensing was enabled. The real-time monitoring of the refractive index variations due to the time-related volume-to-phase transition effects of the nanogels enabled us to determine the environment’s characteristics and broadly classify solvents. Hence the nanogel–POF interface was a descriptor of mathematical functions for substance identification and classification processes. These results epitomize the concept of responsive non-specific nanomaterials to perform a multiparametric description of the environment, offering a specific set of features for the processing stage and particularly suitable for machine and deep learning. Thus, soft MathMaterial interfaces provide the ground to devise devices suitable for the next generation of smart intelligent sensing processes. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)

14 pages, 44176 KiB  
Article
Towards Building a Distributed Virtual Flow Meter via Compressed Continual Learning
by Hasan Asy’ari Arief, Peter James Thomas, Kevin Constable and Aggelos K. Katsaggelos
Sensors 2022, 22(24), 9878; https://doi.org/10.3390/s22249878 - 15 Dec 2022
Viewed by 1790
Abstract
A robust, accurate estimation of fluid flow is the main building block of a distributed virtual flow meter. Unfortunately, a big leap in algorithm development is required for this objective to come to fruition, mainly due to the inability of current machine learning algorithms to make predictions outside the training data distribution. To improve predictions outside the training distribution, we explore the continual learning (CL) paradigm for accurately estimating the characteristics of fluid flow in pipelines. A significant challenge facing CL is catastrophic forgetting. In this paper, we provide a novel approach to address the forgetting problem by compressing the distributed sensor data to increase the capacity of the CL memory bank using a compressive learning algorithm. Through extensive experiments, we show that our approach provides around 8% accuracy improvement over other CL algorithms when applied to a real-world distributed sensor dataset collected from an oilfield. Noticeable accuracy improvement is also achieved when using our proposed approach with the CL benchmark datasets, achieving state-of-the-art accuracies for the CIFAR-10 dataset on the blurry10 and blurry30 settings of 80.83% and 88.91%, respectively. Full article
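
One way to picture compressing exemplars to stretch a fixed memory budget is a replay buffer that stores compressed sensor windows; the sketch below uses plain zlib as a stand-in for the paper's compressive learning algorithm.

```python
import zlib
import numpy as np

class CompressedMemoryBank:
    """Replay memory that stores zlib-compressed sensor windows so more
    exemplars fit in a fixed byte budget (illustrative mechanism only)."""
    def __init__(self, budget_bytes=1_000_000):
        self.budget, self.used, self.items = budget_bytes, 0, []

    def add(self, window: np.ndarray, label: float):
        blob = zlib.compress(window.astype(np.float32).tobytes(), 6)
        while self.items and self.used + len(blob) > self.budget:
            self.used -= len(self.items.pop(0)[0])     # evict oldest exemplar
        self.items.append((blob, label, window.shape))
        self.used += len(blob)

    def sample(self, k=32):
        idx = np.random.choice(len(self.items),
                               size=min(k, len(self.items)), replace=False)
        out = []
        for i in idx:
            blob, label, shape = self.items[i]
            arr = np.frombuffer(zlib.decompress(blob), np.float32).reshape(shape)
            out.append((arr, label))
        return out

bank = CompressedMemoryBank()
bank.add(np.random.rand(2048), label=3.7)   # e.g., one flow-rate training pair
```
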
(This article belongs to the Topic Artificial Intelligence in Sensors)

23 pages, 4690 KiB  
Article
Visual/Inertial/GNSS Integrated Navigation System under GNSS Spoofing Attack
by Nianzu Gu, Fei Xing and Zheng You
Remote Sens. 2022, 14(23), 5975; https://doi.org/10.3390/rs14235975 - 25 Nov 2022
Cited by 9 | Viewed by 2057
Abstract
Visual/Inertial/GNSS (VIG) integrated navigation and positioning systems are widely used in unmanned vehicles and other systems. Such VIG systems are vulnerable to GNSS spoofing attacks, and research on the harm that spoofing causes to the system and performance analyses of VIG systems under GNSS spoofing remain insufficient. In this paper, an open-source VIG algorithm based on nonlinear optimization, VINS-Fusion, is used to analyze the performance of the VIG system under a GNSS spoofing attack. The influence of the visual-inertial odometry (VIO) scale estimation error and the transformation matrix deviation in the transition period of spoofing detection is analyzed. Deviation correction methods based on GNSS-assisted scale compensation coefficient estimation and optimal pose transformation matrix selection are proposed for the VIG-integrated system in spoofing areas. For an area that the integrated system can revisit many times, a global pose map-matching method is proposed. An outfield experiment with a GNSS spoofing attack was carried out. The experimental results show that, even if the GNSS measurements are seriously affected by spoofing, the integrated system can still run independently, following the preset waypoints. The scale compensation coefficient estimation, optimal pose transformation matrix selection, and global pose map-matching methods suppress the estimation error under a spoofing attack. Full article
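
The scale compensation idea can be sketched directly: while GNSS is still trusted, the ratio of GNSS to VIO path lengths yields a scale factor that can correct VIO output once spoofed GNSS is rejected. A minimal numpy sketch with illustrative data follows; it is an interpretation of the abstract, not the authors' code.

```python
import numpy as np

def scale_compensation(vio_positions, gnss_positions):
    """Estimate a VIO scale factor from trusted (pre-spoofing) GNSS:
    ratio of travelled path lengths over matched timestamps."""
    vio = np.diff(vio_positions, axis=0)
    gnss = np.diff(gnss_positions, axis=0)
    return np.linalg.norm(gnss, axis=1).sum() / np.linalg.norm(vio, axis=1).sum()

# Illustrative: VIO underestimates scale by ~10% before spoofing begins.
t = np.linspace(0, 1, 50)[:, None]
gnss = np.hstack([10 * t, 5 * t, 0 * t])
vio = 0.9 * gnss + np.random.normal(0, 0.02, gnss.shape)
s = scale_compensation(vio, gnss)
corrected = s * vio              # applied while GNSS is rejected as spoofed
print(f"scale compensation coefficient: {s:.3f}")
```
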
(This article belongs to the Topic Artificial Intelligence in Sensors)

24 pages, 23655 KiB  
Article
A Multi-Channel Descriptor for LiDAR-Based Loop Closure Detection and Its Application
by Gang Wang, Xiaomeng Wei, Yu Chen, Tongzhou Zhang, Minghui Hou and Zhaohan Liu
Remote Sens. 2022, 14(22), 5877; https://doi.org/10.3390/rs14225877 - 19 Nov 2022
Cited by 5 | Viewed by 2272
Abstract
The simultaneous localization and mapping (SLAM) algorithm is a prerequisite for unmanned ground vehicle (UGV) localization, path planning, and navigation, and it includes two essential components: frontend odometry and backend optimization. Frontend odometry tends to amplify the cumulative error continuously, leading to ghosting and drifting in the mapping results. Loop closure detection (LCD) can address this technical issue by significantly eliminating the cumulative error. Existing LCD methods decide whether a loop exists by constructing local or global descriptors and calculating the similarity between them, which attaches great importance to the design of discriminative descriptors and effective similarity measurement mechanisms. In this paper, we first propose a novel multi-channel descriptor (CMCD) to alleviate the lack of discriminative power that arises when scene description relies on a single type of point cloud information. The distance, height, and intensity information of the point cloud is encoded into three independent channels of the shadow-casting region (bin) and then compressed into a two-dimensional global descriptor. Next, an ORB-based dynamic threshold feature extraction algorithm (DTORB) is designed using objective 2D descriptors to describe the distributions of global and local point clouds. Then, a DTORB-based similarity measurement method is designed using the rotation-invariance and visualization characteristics of descriptor features to overcome the subjective tendency of the constant-threshold ORB algorithm in descriptor feature extraction. Finally, verification is performed on the KITTI odometry sequences and on campus datasets of Jilin University collected by the authors. The experimental results demonstrate the superior performance of our method over state-of-the-art approaches. Full article
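
A scan-context-style reading of the multi-channel descriptor can be sketched as polar binning with one channel each for height, distance, and intensity; the bin counts and aggregation rules below are assumptions, not the paper's exact encoding.

```python
import numpy as np

def multi_channel_descriptor(points, intensity, n_rings=20, n_sectors=60, r_max=80.0):
    """Polar-grid global descriptor with three channels per bin:
    max height, mean distance, and mean intensity."""
    x, y, z = points.T
    r = np.hypot(x, y)
    keep = r < r_max
    ring = (r[keep] / r_max * n_rings).astype(int)
    sector = ((np.arctan2(y, x)[keep] + np.pi)
              / (2 * np.pi) * n_sectors).astype(int) % n_sectors

    desc = np.zeros((3, n_rings, n_sectors))
    count = np.zeros((n_rings, n_sectors)) + 1e-9
    np.maximum.at(desc[0], (ring, sector), z[keep])       # height channel
    np.add.at(desc[1], (ring, sector), r[keep])           # distance channel
    np.add.at(desc[2], (ring, sector), intensity[keep])   # intensity channel
    np.add.at(count, (ring, sector), 1)
    desc[1:] /= count                                     # means over each bin
    return desc

pts = np.random.uniform(-60, 60, (10000, 3))
desc = multi_channel_descriptor(pts, np.random.rand(10000))
print(desc.shape)   # (3, 20, 60): a compressed 2D global descriptor
```
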
(This article belongs to the Topic Artificial Intelligence in Sensors)

20 pages, 7044 KiB  
Article
Rolling Bearing Fault Diagnosis Using Hybrid Neural Network with Principal Component Analysis
by Keshun You, Guangqi Qiu and Yingkui Gu
Sensors 2022, 22(22), 8906; https://doi.org/10.3390/s22228906 - 17 Nov 2022
Cited by 42 | Viewed by 3097
Abstract
With the rapid development of fault prognostics and health management (PHM) technology, more and more deep learning algorithms have been applied to the intelligent fault diagnosis of rolling bearings. Although these can achieve over 90% diagnostic accuracy, their generality and robustness cannot be truly verified under complex, extreme variable loading conditions. In this study, an end-to-end rolling bearing fault diagnosis model based on a hybrid deep neural network with principal component analysis is proposed. Firstly, in order to reduce the complexity of deep learning computation, data pre-processing is performed by principal component analysis (PCA) with feature dimensionality reduction. The preprocessed data are imported into the hybrid deep learning model: the first layer uses a CNN for denoising and simple feature extraction, the second layer makes use of bidirectional long short-term memory (BiLSTM) for deeper extraction of time-series features, and the last layer uses an attention mechanism for optimal weight assignment, which further improves the diagnostic precision. The test accuracy of this model is fully comparable to existing deep learning fault diagnosis models, especially under low load; the test accuracy is 100% at constant load, nearly 90% under variable load, and 72.8% under extreme variable load (2.205 N·m/s–0.735 N·m/s and 0.735 N·m/s–2.205 N·m/s), the worst possible load conditions. The experimental results fully demonstrate that the model has reliable robustness and generality. Full article
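
The PCA preprocessing stage is straightforward to sketch with scikit-learn; the low-rank stand-in data and the 95% variance threshold below are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

# Illustrative vibration windows with low intrinsic dimensionality:
# 500 windows of 1024 samples generated from 64 latent factors.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64)) @ rng.normal(size=(64, 1024))

pca = PCA(n_components=0.95)        # keep enough components for 95% variance
X_red = pca.fit_transform(X)
print(X.shape, "->", X_red.shape)   # reduced input for the CNN-BiLSTM-attention stage
```
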
(This article belongs to the Topic Artificial Intelligence in Sensors)

13 pages, 5688 KiB  
Article
AIoT Precision Feeding Management System
by Cheng-Chang Chiu, Teh-Lu Liao, Chiung-Hsing Chen and Shao-En Kao
Electronics 2022, 11(20), 3358; https://doi.org/10.3390/electronics11203358 - 18 Oct 2022
Cited by 5 | Viewed by 2995
Abstract
Different fish species and different growth stages require different amounts of fish pellets. Excessive fish pellets increase the cost of aquaculture, and the leftover pellets sink to the bottom of the fish farm, causing water pollution. Weather changes and providing too many or too few fish pellets also affect the growth of the fish. In light of the abovementioned factors, this article uses an artificial intelligence of things (AIoT) precision feeding management system to improve an existing fish feeder. The AIoT precision feeding management system is placed on the water surface of the breeding pond to measure the water surface fluctuations in the area of fish pellet application. The buoy, with a built-in three-axis accelerometer, senses the water surface fluctuations when the fish are foraging. Then, through the wireless transmission module, the data are sent back to the receiver and control device of the fish feeder. When the fish feeder receives the signal, it evaluates the returned value to adjust the feeding time. Through this system, intelligent feeding of fish can be achieved by adjusting the amount of fish pellets, reducing the cost of aquaculture. Full article
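
The buoy-side logic can be pictured as a variance test on the accelerometer magnitude; the window length and threshold below are illustrative stand-ins for the system's tuned values.

```python
import numpy as np

def feeding_activity(acc_xyz, window=50, thresh=0.15):
    """Classify water-surface agitation from a buoy's 3-axis accelerometer:
    high short-window variance of the magnitude implies active foraging."""
    mag = np.linalg.norm(acc_xyz, axis=1)
    var = np.array([mag[i:i + window].var()
                    for i in range(0, len(mag) - window, window)])
    return var > thresh              # True -> keep feeding, False -> stop

acc = np.random.normal([0, 0, 9.81], 0.3, size=(1000, 3))   # simulated samples
active = feeding_activity(acc)
print(f"feeder keeps dispensing in {active.sum()} of {active.size} windows")
```
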
(This article belongs to the Topic Artificial Intelligence in Sensors)

19 pages, 8580 KiB  
Article
Intrinsic Calibration of Multi-Beam LiDARs for Agricultural Robots
by Na Sun, Quan Qiu, Zhengqiang Fan, Tao Li, Chao Ji, Qingchun Feng and Chunjiang Zhao
Remote Sens. 2022, 14(19), 4846; https://doi.org/10.3390/rs14194846 - 28 Sep 2022
Cited by 5 | Viewed by 2852
Abstract
With the advantages of high measurement accuracy and wide detection range, LiDARs have been widely used in information perception research to develop agricultural robots. However, the internal configuration of the laser transmitter layout changes with increasing sensor working duration, which makes it difficult to obtain accurate measurement with calibration files based on factory settings. To solve this problem, we investigate the intrinsic calibration of multi-beam laser sensors. Specifically, we calibrate the five intrinsic parameters of LiDAR with a nonlinear optimization strategy based on static planar models, which include measured distance, rotation angle, pitch angle, horizontal distance, and vertical distance. Firstly, we establish a mathematical model based on the physical structure of LiDAR. Secondly, we calibrate the internal parameters according to the mathematical model and evaluate the measurement accuracy after calibration. Here, we illustrate the parameter calibration with three steps: planar model estimation, objective function construction, and nonlinear optimization. We also introduce the ranging accuracy evaluation metrics, including the standard deviation of the distance from the laser scanning points to the planar models and the 3σ criterion. Finally, the experimental results show that the ranging error of calibrated sensors can be maintained within 3 cm, which verifies the effectiveness of the laser intrinsic calibration. Full article
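
The core of such a calibration is a point-to-plane residual minimized over the intrinsic parameters. The sketch below refines just two of the five intrinsics (range bias and pitch offset) for a single beam against a known tilted plane, using scipy's nonlinear least squares; the geometry and error magnitudes are illustrative, not the paper's setup.

```python
import numpy as np
from scipy.optimize import least_squares

h = 1.5                                         # sensor height above the plane (m)
omega_true = np.deg2rad(-10.0)                  # true beam pitch angle
theta = np.deg2rad(np.linspace(-60, 60, 100))   # azimuth sweep hitting the plane

# Ground truth: points of this beam lie on the tilted plane z = 0.2*x.
rho_true = -h / (np.sin(omega_true) - 0.2 * np.cos(omega_true) * np.cos(theta))
rho_meas = rho_true + 0.05 + np.random.normal(0, 0.005, theta.size)  # +5 cm bias
omega_nominal = omega_true + np.deg2rad(0.3)    # stale factory pitch value

def residual(params):
    d_rho, d_omega = params
    rho, om = rho_meas + d_rho, omega_nominal + d_omega
    x = rho * np.cos(om) * np.cos(theta)
    z = h + rho * np.sin(om)
    return z - 0.2 * x                          # signed residual to the plane

res = least_squares(residual, x0=[0.0, 0.0])
print("range bias (m), pitch offset (deg):", res.x[0], np.rad2deg(res.x[1]))
# Expected roughly [-0.05, -0.3], undoing the injected errors.
```
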
(This article belongs to the Topic Artificial Intelligence in Sensors)

22 pages, 6546 KiB  
Article
Optimal Compensation of MEMS Gyroscope Noise Kalman Filter Based on Conv-DAE and MultiTCN-Attention Model in Static Base Environment
by Zimin Huo, Fuchao Wang, Honghai Shen, Xin Sun, Jingzhong Zhang, Yaobin Li and Hairong Chu
Sensors 2022, 22(19), 7249; https://doi.org/10.3390/s22197249 - 24 Sep 2022
Cited by 10 | Viewed by 2662
Abstract
Errors in microelectromechanical system (MEMS) inertial measurement units (IMUs) are large, complex, nonlinear, and time varying, so traditional model-based noise reduction and compensation methods are not applicable. This paper proposes a noise reduction method based on multi-layer combined deep learning for the MEMS gyroscope in the static base state. In this method, a combined model of the MEMS gyroscope is constructed from a Convolutional Denoising Auto-Encoder (Conv-DAE) and a multi-layer temporal convolutional network with attention mechanism (MultiTCN-Attention). Based on the robust data processing capability of deep learning, the noise features are obtained from past gyroscope data, and optimizing the Kalman filter (KF) parameters with the particle swarm optimization (PSO) algorithm significantly improves the filtering and noise reduction accuracy. The experimental results show that, compared with the original data, the noise standard deviation after filtering by the proposed combined model decreases by 77.81% and 76.44% on the x and y axes, respectively; compared with the existing MEMS gyroscope noise compensation method based on the autoregressive moving average with Kalman filter (ARMA-KF) model, it decreases by 44.00% and 46.66% on the x and y axes, respectively, reducing the noise impact by nearly a factor of three. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)

9 pages, 17812 KiB  
Article
A Visual Saliency-Based Neural Network Architecture for No-Reference Image Quality Assessment
by Jihyoung Ryu
Appl. Sci. 2022, 12(19), 9567; https://doi.org/10.3390/app12199567 - 23 Sep 2022
Cited by 7 | Viewed by 2040
Abstract
Deep learning has recently been used to study blind image quality assessment (BIQA) in great detail, yet the scarcity of high-quality algorithms prevents them from being developed further and used in real-time scenarios. Patch-based techniques have been used to forecast the quality of an image, but they typically assign the picture quality score to individual patches of the image. As a result, many misleading scores come from patches, whereas only some regions of the image are important and can contribute strongly toward the correct prediction of its quality. To prevent outlier regions, we suggest a technique with a visual saliency module that allows only the important regions to pass to the neural network, so the network learns only the information required to predict the quality. The neural network architecture used in this study is Inception-ResNet-v2. We assess the proposed strategy using a benchmark database (KADID-10k) to show its efficacy. The outcome demonstrates better performance compared with certain popular no-reference IQA (NR-IQA) and full-reference IQA (FR-IQA) approaches. This technique is intended to be used to estimate the quality of images acquired in real time from drone imagery. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)

18 pages, 4392 KiB  
Article
An Adaptive Group of Density Outlier Removal Filter: Snow Particle Removal from LiDAR Data
by Minh-Hai Le, Ching-Hwa Cheng, Don-Gey Liu and Thanh-Tuan Nguyen
Electronics 2022, 11(19), 2993; https://doi.org/10.3390/electronics11192993 - 21 Sep 2022
Cited by 9 | Viewed by 2768
Abstract
Light Detection and Ranging (LiDAR) is an important technology integrated into self-driving cars to enhance the reliability of these systems. Even with some advantages over cameras, it is still limited under extreme weather conditions such as heavy rain, fog, or snow. Traditional methods such as Radius Outlier Removal (ROR) and Statistical Outlier Removal (SOR) are limited in their ability to detect snow points in LiDAR point clouds. This paper proposes an Adaptive Group of Density Outlier Removal (AGDOR) filter that can remove snow particles more effectively from raw LiDAR point clouds, with verification on the Winter Adverse Driving Dataset (WADS). In our proposed method, an intensity threshold combined with a proposed outlier removal filter is employed. Outstanding performance is obtained, with accuracy up to 96% and a processing speed of 0.51 s per frame. In particular, our filter outperforms the state-of-the-art filter by achieving 16.32% higher precision at the same accuracy; however, our method achieves lower recall than the state-of-the-art method. This indicates that AGDOR retains a significant number of object points from the LiDAR data. The results suggest that our filter would be useful for snow removal in harsh weather for autonomous driving systems. Full article
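
The adaptive-radius idea can be sketched in a few lines: grow the neighbour-search radius with range to respect LiDAR sparsity, and test only low-intensity points as snow candidates. The parameters below are illustrative, not AGDOR's.

```python
import numpy as np
from scipy.spatial import cKDTree

def adaptive_radius_outlier_removal(points, intensity, k=5,
                                    base_radius=0.1, alpha=0.01, int_thresh=20):
    """Sketch of adaptive-radius outlier removal: the search radius grows
    with range, and only low-intensity points (snow candidates) are tested."""
    tree = cKDTree(points)
    rng = np.linalg.norm(points, axis=1)
    radius = base_radius + alpha * rng          # larger radius for distant points
    keep = np.ones(len(points), dtype=bool)
    for i in np.where(intensity < int_thresh)[0]:
        n_neighbors = len(tree.query_ball_point(points[i], radius[i])) - 1
        keep[i] = n_neighbors >= k              # isolated low-intensity point -> snow
    return points[keep]

pts = np.random.uniform(-50, 50, (5000, 3))
inten = np.random.uniform(0, 255, 5000)
filtered = adaptive_radius_outlier_removal(pts, inten)
print(len(pts) - len(filtered), "points removed as snow candidates")
```
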
(This article belongs to the Topic Artificial Intelligence in Sensors)

17 pages, 4183 KiB  
Article
Line Structure Extraction from LiDAR Point Cloud Based on the Persistence of Tensor Feature
by Xuan Wang, Haiyang Lyu, Weiji He and Qian Chen
Appl. Sci. 2022, 12(18), 9190; https://doi.org/10.3390/app12189190 - 14 Sep 2022
Viewed by 2466
Abstract
The LiDAR point cloud has been widely used in scenarios of automatic driving, object recognition, structure reconstruction, etc., yet line structure extraction remains a challenging problem due to noise and limited accuracy, especially in data acquired by consumer electronic devices. To address this issue, a line structure extraction method based on the persistence of the tensor feature is proposed and subsequently applied to data acquired by an iPhone-based LiDAR sensor. The tensor of each point is encoded, voted, and aggregated by its neighborhood, and further decomposed into different geometric features in each dimension. Then, the line feature in the point cloud is represented and computed using the persistence of the tensor feature. Finally, the line structure is extracted based on persistent homology according to discrete Morse theory. With the LiDAR point cloud collected by an iPhone 12 Pro Max, experiments are conducted, line structures are extracted from two different datasets, and the results compare well with other related results. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)

15 pages, 3833 KiB  
Article
N-Step Pre-Training and Décalcomanie Data Augmentation for Micro-Expression Recognition
by Chaehyeon Lee, Jiuk Hong and Heechul Jung
Sensors 2022, 22(17), 6671; https://doi.org/10.3390/s22176671 - 3 Sep 2022
Cited by 4 | Viewed by 2171
Abstract
Facial expressions are divided into micro- and macro-expressions. Micro-expressions are low-intensity emotions presented for a short moment of about 0.25 s, whereas macro-expressions last up to 4 s. To derive micro-expressions, participants are asked to suppress their emotions as much as possible while watching emotion-inducing videos. However, it is a challenging process, and the number of samples collected tends to be less than those of macro-expressions. Because training models with insufficient data may lead to decreased performance, this study proposes two ways to solve the problem of insufficient data for micro-expression training. The first method involves N-step pre-training, which performs multiple transfer learning from action recognition datasets to those in the facial domain. Second, we propose Décalcomanie data augmentation, which is based on facial symmetry, to create a composite image by cutting and pasting both faces around their center lines. The results show that the proposed methods can successfully overcome the data shortage problem and achieve high performance. Full article
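
The Décalcomanie composite is simple to sketch for an aligned, centered face crop; the alignment and center-line estimation that the method depends on are omitted here.

```python
import numpy as np

def decalcomanie(face, left_half=True):
    """Mirror-composite augmentation based on facial symmetry: keep one half
    of an aligned face image and paste its horizontal mirror onto the other."""
    h, w = face.shape[:2]
    half = w // 2
    out = face.copy()
    if left_half:
        out[:, half:] = face[:, :half][:, ::-1]   # right half <- mirrored left
    else:
        out[:, :half] = face[:, half:][:, ::-1]   # left half <- mirrored right
    return out

face = np.random.randint(0, 255, (128, 128, 3), dtype=np.uint8)  # stand-in frame
aug_l = decalcomanie(face, left_half=True)    # two composites per input frame
aug_r = decalcomanie(face, left_half=False)
```
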
(This article belongs to the Topic Artificial Intelligence in Sensors)

22 pages, 6148 KiB  
Article
Deep-Learning-Based Method for Estimating Permittivity of Ground-Penetrating Radar Targets
by Hui Wang, Shan Ouyang, Qinghua Liu, Kefei Liao and Lijun Zhou
Remote Sens. 2022, 14(17), 4293; https://doi.org/10.3390/rs14174293 - 31 Aug 2022
Cited by 10 | Viewed by 3054
Abstract
Correctly estimating the relative permittivity of buried targets is crucial for accurately determining the target type and geometric size and for the reconstruction of shallow surface geological structures. In order to effectively identify the dielectric properties of buried targets, on the basis of extracting the feature information of B-scan images, we propose an inversion method based on a deep neural network (DNN) to estimate the relative permittivity of targets. We first take the physical mechanism of ground-penetrating radar (GPR) working in the reflection measurement mode as the constraint condition, and then design a convolutional neural network (CNN) to extract the feature hyperbola of the underground target, which is used to calculate the buried depth of the target and the relative permittivity of the background medium. We further build a regression network and train the network model with a labeled sample set to estimate the relative permittivity of the target. Tests were carried out on a GPR simulation dataset and a field dataset of underground rainwater pipelines. The results show that the inversion method has high accuracy in estimating the relative permittivity of the target. Full article
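
The physical relations that constrain the inversion can be written compactly: the wave speed in the medium is v = c/√εr, depth follows from the two-way travel time, and the background permittivity can be fitted from a reflection hyperbola. A small worked example with illustrative numbers follows.

```python
import numpy as np

C = 0.3   # speed of light in m/ns

def depth_from_twt(t_ns, eps_r):
    """Two-way travel time -> depth, using v = c / sqrt(eps_r)."""
    return C / np.sqrt(eps_r) * t_ns / 2.0

def eps_from_hyperbola(x_offsets, t_ns, depth):
    """Fit the background permittivity from a reflection hyperbola:
    (v*t/2)^2 = depth^2 + x^2, solved for v and converted to eps_r."""
    v = 2.0 * np.sqrt(depth ** 2 + x_offsets ** 2) / t_ns
    return (C / v.mean()) ** 2

# Illustrative target: 0.6 m deep in soil with true eps_r = 9 (v = 0.1 m/ns).
x = np.linspace(-0.5, 0.5, 21)
t = 2.0 * np.sqrt(0.6 ** 2 + x ** 2) / 0.1 + np.random.normal(0, 0.05, x.size)
print("estimated background eps_r:", eps_from_hyperbola(x, t, depth=0.6))
print("depth at apex:", depth_from_twt(t[x.size // 2], eps_r=9.0), "m")
```
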
(This article belongs to the Topic Artificial Intelligence in Sensors)

13 pages, 1980 KiB  
Article
SGSNet: A Lightweight Depth Completion Network Based on Secondary Guidance and Spatial Fusion
by Baifan Chen, Xiaotian Lv, Chongliang Liu and Hao Jiao
Sensors 2022, 22(17), 6414; https://doi.org/10.3390/s22176414 - 25 Aug 2022
Cited by 1 | Viewed by 2052
Abstract
The depth completion task aims to generate a dense depth map from a sparse depth map and the corresponding RGB image. As a data preprocessing task, the challenge is obtaining denser depth maps without affecting the real-time performance of downstream tasks. In this paper, we propose a lightweight depth completion network based on secondary guidance and spatial fusion, named SGSNet. We design the image feature extraction module to better extract features in parallel from different scales, both between and within layers, and to generate guidance features. SGSNet then uses secondary guidance to complete the depth completion. The first guidance stage uses the lightweight guidance module to quickly guide LiDAR feature extraction with the texture features of RGB images. The second guidance stage uses the depth information completion module for sparse depth map feature completion and inputs the result into the DA-CSPN++ module to complete the dense depth map re-guidance. By using the lightweight guidance module, the overall network runs ten times faster than the baseline. The network is relatively lightweight, running at up to thirty frames per second, which is sufficient to meet the speed needs of large-scale SLAM and three-dimensional reconstruction for sensor data extraction. At the time of submission, the algorithm in SGSNet ranked first in accuracy in the KITTI ranking of lightweight depth completion methods; it was 37.5% faster than the top published algorithms in that ranking and second in the full ranking. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)

20 pages, 1434 KiB  
Article
EVtracker: An Event-Driven Spatiotemporal Method for Dynamic Object Tracking
by Shixiong Zhang, Wenmin Wang, Honglei Li and Shenyong Zhang
Sensors 2022, 22(16), 6090; https://doi.org/10.3390/s22166090 - 15 Aug 2022
Cited by 9 | Viewed by 3303
Abstract
An event camera is a novel bio-inspired sensor that effectively compensates for the shortcomings of current frame cameras, which include high latency, low dynamic range, motion blur, etc. Rather than capturing images at a fixed frame rate, an event camera produces an asynchronous signal by measuring the brightness change of each pixel. Consequently, an appropriate algorithm framework that can handle the unique data types of event-based vision is required. In this paper, we propose a dynamic object tracking framework using an event camera to achieve long-term stable tracking of event objects. One of the key novel features of our approach is to adopt an adaptive strategy that adjusts the spatiotemporal domain of event data. To achieve this, we reconstruct event images from high-speed asynchronous streaming data via online learning. Additionally, we apply the Siamese network to extract features from event data. In contrast to earlier models that only extract hand-crafted features, our method provides powerful feature description and a more flexible reconstruction strategy for event data. We assess our algorithm in three challenging scenarios: 6-DoF (six degrees of freedom), translation, and rotation. Unlike fixed cameras in traditional object tracking tasks, all three tracking scenarios involve the simultaneous violent rotation and shaking of both the camera and objects. Results from extensive experiments suggest that our proposed approach achieves superior accuracy and robustness compared to other state-of-the-art methods. Without reducing time efficiency, our novel method exhibits a 30% increase in accuracy over other recent models. Furthermore, results indicate that event cameras are capable of robust object tracking, which is a task that conventional cameras cannot adequately perform, especially for super-fast motion tracking and challenging lighting situations. Full article
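
The reconstruction step can be sketched as accumulating the asynchronous stream into frames over an adjustable time slice, the spatiotemporal window that the adaptive strategy would tune. The event format and sensor size below are illustrative assumptions.

```python
import numpy as np

def events_to_frame(events, h, w, t0, dt):
    """Accumulate an asynchronous event stream (x, y, t, polarity) into a 2D
    frame over the time window [t0, t0 + dt)."""
    frame = np.zeros((h, w), dtype=np.float32)
    sel = (events["t"] >= t0) & (events["t"] < t0 + dt)
    np.add.at(frame, (events["y"][sel], events["x"][sel]),
              events["p"][sel].astype(np.float32))
    return frame

# Illustrative stream: 100k events over 50 ms on a 260x346 sensor.
n = 100_000
events = {
    "x": np.random.randint(0, 346, n), "y": np.random.randint(0, 260, n),
    "t": np.sort(np.random.uniform(0, 0.05, n)),
    "p": np.random.choice([-1, 1], n),        # brightness-change polarity
}
frame = events_to_frame(events, 260, 346, t0=0.0, dt=0.005)  # one 5 ms slice
```
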
(This article belongs to the Topic Artificial Intelligence in Sensors)

22 pages, 10509 KiB  
Article
Accurate Spatial Positioning of Target Based on the Fusion of Uncalibrated Image and GNSS
by Binbin Liang, Songchen Han, Wei Li, Daoyong Fu, Ruliang He and Guoxin Huang
Remote Sens. 2022, 14(16), 3877; https://doi.org/10.3390/rs14163877 - 10 Aug 2022
Cited by 5 | Viewed by 2092
Abstract
The accurate spatial positioning of the target in a fixed camera image is a critical sensing technique. Conventional visual spatial positioning methods rely on tedious camera calibration and face great challenges in selecting the representative feature points to compute the position of the target, especially when existing occlusion or in remote scenes. In order to avoid these deficiencies, this paper proposes a deep learning approach for accurate visual spatial positioning of the targets with the assistance of Global Navigation Satellite System (GNSS). It contains two stages: the first stage trains a hybrid supervised and unsupervised auto-encoder regression network offline to gain capability of regressing geolocation (longitude and latitude) directly from the fusion of image and GNSS, and learns an error scale factor to evaluate the regression error. The second stage firstly predicts regressed accurate geolocation online from the observed image and GNSS measurement, and then filters the predictive geolocation and the measured GNSS to output the optimal geolocation. The experimental results showed that the proposed approach increased the average positioning accuracy by 56.83%, 37.25%, 41.62% in a simulated scenario and 31.25%, 7.43%, 38.28% in a real-world scenario, compared with GNSS, the Interacting Multiple Model−Unscented Kalman Filters (IMM-UKF) and the supervised deep learning approach, respectively. Other improvements were also achieved in positioning stability, robustness, generalization, and performance in GNSS denied environments. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)
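
As a rough illustration of the second stage's idea, fusing a network-regressed geolocation with a raw GNSS fix, the sketch below applies a minimum-variance weighted combination. The variances standing in for the learned error scale factor and the GNSS error model, and the coordinates themselves, are hypothetical.

import numpy as np

def fuse(regressed, gnss, var_regressed, var_gnss):
    """Minimum-variance combination of two independent position estimates."""
    w = var_gnss / (var_regressed + var_gnss)  # weight on the regression
    return w * regressed + (1.0 - w) * gnss

regressed = np.array([30.5701, 104.0665])   # (lat, lon) from the network
gnss = np.array([30.5703, 104.0668])        # raw GNSS measurement
fused = fuse(regressed, gnss, var_regressed=1e-9, var_gnss=4e-9)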

25 pages, 29588 KiB  
Article
Evaluating the Forest Ecosystem through a Semi-Autonomous Quadruped Robot and a Hexacopter UAV
by Moad Idrissi, Ambreen Hussain, Bidushi Barua, Ahmed Osman, Raouf Abozariba, Adel Aneiba and Taufiq Asyhari
Sensors 2022, 22(15), 5497; https://doi.org/10.3390/s22155497 - 23 Jul 2022
Cited by 17 | Viewed by 4085
Abstract
Accurate and timely monitoring is imperative to the resilience of forests for economic growth and climate regulation. In the UK, forest management depends on citizen science to perform tedious and time-consuming data collection tasks. In this study, an unmanned aerial vehicle (UAV) equipped with a light sensor and positioning capabilities is deployed to perform aerial surveying and to observe a series of forest health indicators (FHIs) that are inaccessible from the ground. However, many FHIs, such as burrows and deadwood, can only be observed from under the tree canopy. Hence, we employ a quadruped robot with an integrated camera as well as an external sensing platform (ESP) equipped with light and infrared cameras, computing, communication and power modules to observe these FHIs from the ground. The forest-monitoring time can be extended by reducing computation and conserving energy. Therefore, we analysed different versions of the YOLO object-detection algorithm in terms of accuracy, deployment and usability by the ESP to accomplish extensive low-latency detection. In addition, we constructed a series of new datasets to train YOLOv5x and YOLOv5s to recognise FHIs. Our results reveal that YOLOv5s is lightweight and easy to train for FHI detection while performing close to real-time, cost-effective and autonomous forest monitoring. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)
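
For readers who want to try the YOLOv5s model that the study favours, a minimal inference sketch using the public Ultralytics torch.hub entry point is shown below. It loads generic pretrained COCO weights and assumes an internet connection and a local image file; the study's own FHI datasets and trained weights are not reproduced here.

import torch

# Load the small YOLOv5 variant highlighted in the abstract.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
model.conf = 0.4                     # confidence threshold (illustrative)
results = model("forest_floor.jpg")  # hypothetical under-canopy image
results.print()                      # classes, scores and boxes per detection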

25 pages, 16081 KiB  
Article
An SAR Ship Object Detection Algorithm Based on Feature Information Efficient Representation Network
by Jimin Yu, Tao Wu, Shangbo Zhou, Huilan Pan, Xin Zhang and Wei Zhang
Remote Sens. 2022, 14(14), 3489; https://doi.org/10.3390/rs14143489 - 21 Jul 2022
Cited by 16 | Viewed by 2834
Abstract
In synthetic aperture radar (SAR) ship images, targets are small and densely distributed, the background is complex and changeable, ship targets are difficult to distinguish from the surrounding background, and there are many ship-like targets. This makes it difficult for deep-learning-based target detection algorithms to obtain effective feature information, resulting in missed and false detections. The effective expression of the feature information of the target to be detected is the key to a target detection algorithm, and improving the clear expression of image feature information within the network has always been difficult. To address these problems, this paper proposes a new target detection algorithm, the feature information efficient representation network (FIERNet). The algorithm can extract better feature details, enhance network feature fusion and information expression, and improve model detection capabilities. First, the convolution transformer feature extraction (CTFE) module is proposed, and a convolution transformer feature extraction network (CTFENet) is built with this module as a feature extraction block. The network enables the model to obtain more accurate and comprehensive feature information, weakens the interference of invalid information, and improves the overall performance of the network. Second, a new effective feature information fusion (EFIF) module is proposed to enhance the transfer and fusion of the main information of feature maps. Finally, a new frame-decoding formula is proposed to further improve the coincidence between the predicted and target frames and obtain more accurate picture information. Experiments show that the method achieves 94.14% and 92.01% mean average precision (mAP) on the SSDD and SAR-Ship datasets, and it works well on large-scale SAR ship images. In addition, FIERNet greatly reduces the occurrence of missed and false detections in SAR ship detection. Compared to other state-of-the-art object detection algorithms, FIERNet outperforms them on various performance metrics on SAR images. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)

22 pages, 3722 KiB  
Article
Trigger-Based K-Band Microwave Ranging System Thermal Control with Model-Free Learning Process
by Xiaoliang Wang, Hongxu Zhu, Qiang Shen, Shufan Wu, Nan Wang, Xuan Liu, Dengfeng Wang, Xingwang Zhong, Zhu Zhu and Christopher Damaren
Electronics 2022, 11(14), 2173; https://doi.org/10.3390/electronics11142173 - 11 Jul 2022
Cited by 1 | Viewed by 1620
Abstract
Micron-level-accuracy K-band microwave ranging in space relies on the stability of the on-board payload thermal control; however, the large number of thermal sensors and heating devices around the deployed instruments consumes the precious internal communication resources of the central computer. A further problem is that the payload's thermal protection environment can deteriorate gradually over years of operation. In this paper, a new trigger-based thermal system controller design is proposed, with consideration of spaceborne communication burden reduction and actuator saturation, which guarantees stable temperature fluctuations of microwave payloads in space missions. The controller combines a nominal constant-sampling PID inner loop and a trigger-based outer loop structure under the constraint of heating device saturation. Moreover, an iterative model-free reinforcement learning process is adopted that can approximate the estimation of thermal dynamic modeling uncertainty online. Via extensive experiments in a laboratory environment, the performance of the proposed trigger-based thermal control is verified, with smaller temperature fluctuations compared to the nominal control and a clear reduction in system communication. The online learning algorithm is also tested under deliberate thermal conditions that deviate from the original system; the results quickly converge to normal once the thermal disturbance is removed. Finally, the ranging accuracy is tested for the whole system, and a 25% (RMS) performance improvement, to about 2.2 µm, can be realized by using the trigger-based control strategy compared to the nominal control method. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)
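
A minimal sketch of the control structure described, a constant-sampling PID inner loop wrapped by a trigger-based outer loop with heater saturation, might look as follows. The gains, trigger band and measure() callable are illustrative assumptions, and the paper's reinforcement-learning compensation of model uncertainty is omitted.

def pid_step(error, state, kp=2.0, ki=0.1, kd=0.5, dt=1.0, u_max=10.0):
    """One PID update with output clipped to the heater's saturation limit."""
    state["i"] += error * dt
    d = (error - state["e_prev"]) / dt
    state["e_prev"] = error
    u = kp * error + ki * state["i"] + kd * d
    return max(-u_max, min(u_max, u))

def control_loop(measure, setpoint, steps, trigger_band=0.05):
    """Update the heater command only when the error leaves the trigger band."""
    state = {"i": 0.0, "e_prev": 0.0}
    u = 0.0
    for _ in range(steps):
        error = setpoint - measure()
        if abs(error) > trigger_band:   # outer loop triggers a transmission
            u = pid_step(error, state)
        yield u                         # otherwise hold the last command

# usage sketch: commands = list(control_loop(lambda: 20.4, 21.0, steps=100))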

16 pages, 802 KiB  
Technical Note
Optimal Sensor Placement Using Learning Models—A Mediterranean Case Study
by Hrvoje Kalinić, Leon Ćatipović and Frano Matić
Remote Sens. 2022, 14(13), 2989; https://doi.org/10.3390/rs14132989 - 22 Jun 2022
Cited by 4 | Viewed by 2367
Abstract
In this paper, we discuss different approaches to optimal sensor placement and propose that an optimal sensor location can be selected using unsupervised learning methods such as self-organising maps, neural gas or the K-means algorithm. We show how each of the algorithms can be used for this purpose and that additional constraints, such as distance from shore, which is presumed to be related to deployment and maintenance costs, can be considered. The study uses wind data over the Mediterranean Sea and evaluates sensor location selection via the reconstruction error. The reconstruction error shows that results deteriorate when additional constraints are added. However, it is also shown that a small fraction of the data is sufficient to reconstruct wind data over a larger geographic area with an error comparable to that of a meteorological model. The results are confirmed by several experiments and are consistent with the results of previous studies. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)
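
In the spirit of the K-means variant discussed, the following sketch clusters per-grid-point wind series, places one sensor per cluster, and scores the placement by reconstruction error. The random data is a stand-in for the Mediterranean wind fields, and the shore-distance constraint is omitted.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
wind = rng.normal(size=(500, 24))   # 500 grid points, 24-sample series each

km = KMeans(n_clusters=10, n_init=10, random_state=0).fit(wind)
labels = km.labels_

# One sensor per cluster: the member grid point closest to the centroid.
sensor_idx = []
for k, c in enumerate(km.cluster_centers_):
    members = np.where(labels == k)[0]
    sensor_idx.append(members[np.argmin(np.linalg.norm(wind[members] - c, axis=1))])

# Reconstruct every grid point from its cluster's sensor and score the layout.
recon = wind[np.array(sensor_idx)[labels]]
print("reconstruction MAE:", np.mean(np.abs(wind - recon)))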

15 pages, 2560 KiB  
Article
DPSSD: Dual-Path Single-Shot Detector
by Dongri Shan, Yalu Xu, Peng Zhang, Xiaofang Wang, Dongmei He, Chenglong Zhang, Maohui Zhou and Guoqi Yu
Sensors 2022, 22(12), 4616; https://doi.org/10.3390/s22124616 - 18 Jun 2022
Cited by 2 | Viewed by 2232
Abstract
Object detection is one of the most important and challenging branches of computer vision. It is widely used in everyday life, for example in surveillance security and autonomous driving. We propose a novel dual-path multi-scale object detection paradigm in order to extract richer feature information for the object detection task and to mitigate the multi-scale object detection problem, and on this basis we design a single-stage general object detection algorithm called the Dual-Path Single-Shot Detector (DPSSD). The dual path, consisting of a residual path and a concatenation path, ensures that shallow features can be more easily utilized to improve detection accuracy. Our improved dual-path network is more adaptable to multi-scale object detection tasks, and we combine it with a feature fusion module to generate a multi-scale feature learning paradigm called the "Dual-Path Feature Pyramid". We trained the models on the PASCAL VOC and COCO datasets with 320-pixel and 512-pixel inputs, respectively, and performed inference experiments to validate the structures in the neural network. The experimental results show that our algorithm has an advantage over anchor-based single-stage object detection algorithms and achieves an advanced level of average accuracy. Researchers can replicate the reported results of this paper. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)
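
A toy PyTorch block illustrating the dual-path idea, a residual (additive) path and a concatenation path merged in one unit, is sketched below. Channel sizes are illustrative, and the paper's exact topology and Dual-Path Feature Pyramid are not reproduced.

import torch
import torch.nn as nn

class DualPathBlock(nn.Module):
    def __init__(self, channels, growth=32):
        super().__init__()
        self.residual = nn.Sequential(      # residual path: add
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
        )
        self.dense = nn.Sequential(         # concatenation path: grow channels
            nn.Conv2d(channels, growth, 3, padding=1),
            nn.BatchNorm2d(growth), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        res = x + self.residual(x)
        return torch.cat([res, self.dense(x)], dim=1)

x = torch.randn(1, 64, 40, 40)
y = DualPathBlock(64)(x)    # -> shape (1, 96, 40, 40)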

17 pages, 5342 KiB  
Article
Encoder-Decoder Structure with Multiscale Receptive Field Block for Unsupervised Depth Estimation from Monocular Video
by Songnan Chen, Junyu Han, Mengxia Tang, Ruifang Dong and Jiangming Kan
Remote Sens. 2022, 14(12), 2906; https://doi.org/10.3390/rs14122906 - 17 Jun 2022
Cited by 2 | Viewed by 2498
Abstract
Monocular depth estimation is a fundamental yet challenging task in computer vision, as depth information is lost when 3D scenes are mapped to 2D images. Although deep learning-based methods have led to considerable improvements for this task in a single image, most existing approaches still fail to overcome this limitation. Supervised learning methods model depth estimation as a regression problem and, as a result, require large amounts of ground-truth depth data for training in actual scenarios. Unsupervised learning methods treat depth estimation as the synthesis of a new disparity map, which means that rectified stereo image pairs need to be used as the training dataset. Aiming to solve this problem, we present an encoder-decoder based framework that infers depth maps from monocular video snippets in an unsupervised manner. First, we design an unsupervised learning scheme for the monocular depth estimation task based on the basic principles of structure from motion (SfM); it uses only adjacent video clips, rather than paired training data, as supervision. Second, our method predicts two confidence masks to improve the robustness of the depth estimation model and avoid the occlusion problem. Finally, we leverage the largest-scale and minimum depth loss instead of the multiscale and average loss to improve the accuracy of depth estimation. The experimental results on the benchmark KITTI dataset for depth estimation show that our method outperforms competing unsupervised methods. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)
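
One ingredient such methods often use, and a plausible reading of the "minimum" loss mentioned above, is to take the per-pixel minimum of the reprojection errors from the adjacent frames rather than their average, which is more robust to occlusion. The sketch below illustrates that generic idea only; the view-synthesis warping and the paper's confidence masks are abstracted away.

import torch

def min_reprojection_loss(target, warped_prev, warped_next):
    """Per-pixel minimum of L1 photometric errors from two warped sources."""
    err_prev = (target - warped_prev).abs().mean(dim=1)  # mean over channels
    err_next = (target - warped_next).abs().mean(dim=1)
    return torch.minimum(err_prev, err_next).mean()

target, w_prev, w_next = (torch.rand(2, 3, 64, 64) for _ in range(3))
loss = min_reprojection_loss(target, w_prev, w_next)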

15 pages, 11301 KiB  
Communication
A Low-Power Analog Processor-in-Memory-Based Convolutional Neural Network for Biosensor Applications
by Sung-June Byun, Dong-Gyun Kim, Kyung-Do Park, Yeun-Jin Choi, Pervesh Kumar, Imran Ali, Dong-Gyu Kim, June-Mo Yoo, Hyung-Ki Huh, Yeon-Jae Jung, Seok-Kee Kim, Young-Gun Pu and Kang-Yoon Lee
Sensors 2022, 22(12), 4555; https://doi.org/10.3390/s22124555 - 16 Jun 2022
Cited by 8 | Viewed by 2642
Abstract
This paper presents an on-chip implementation of an analog processor-in-memory (PIM)-based convolutional neural network (CNN) in a biosensor. The operator was designed for low power in order to implement the CNN as an on-chip device on the biosensor, which consists of a 32 × 32 array of sensing plates. In this paper, a 10T SRAM-based analog PIM, which performs multiply-and-average (MAV) operations via multiplication and accumulation (MAC), is used as a filter to implement the CNN at low power. The PIM carries out the MAV operations for feature extraction as a filter, using an analog method. To prepare the input features, an input matrix is formed by scanning the 32 × 32 biosensor with a digital controller operating at a 32 MHz frequency. Memory reuse techniques were applied to the analog SRAM filter, which is the core of the low-power implementation. In order to accurately assess the MAC operational efficiency and classification, we modeled and trained numerous input features based on biosignal data and confirmed the classification. When the learned weight data were input, 19 mW of power was consumed during the analog-based MAC operation. The implementation showed an energy efficiency of 5.38 TOPS/W and is distinguished by its 8-bit resolution, implemented in a 180 nm CMOS process. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)

17 pages, 2262 KiB  
Article
A Vision-Based System for Stage Classification of Parkinsonian Gait Using Machine Learning and Synthetic Data
by Jorge Marquez Chavez and Wei Tang
Sensors 2022, 22(12), 4463; https://doi.org/10.3390/s22124463 - 13 Jun 2022
Cited by 12 | Viewed by 3061
Abstract
Parkinson’s disease is characterized by abnormal gait, which worsens as the condition progresses. Although several methods have been able to classify this feature through pose-estimation algorithms and machine-learning classifiers, few studies have been able to analyze its progression to perform stage classification of the disease. Moreover, despite the increasing popularity of these systems for gait analysis, the amount of available gait-related data can often be limited, thereby hindering the progress of the implementation of this technology in the medical field. As such, creating a quantitative prognosis method that can identify the severity levels of a Parkinsonian gait with little data could help facilitate the study of the Parkinsonian gait for rehabilitation. In this contribution, we propose a vision-based system to analyze the Parkinsonian gait at various stages using linear interpolation of Parkinsonian gait models. We present a comparison of the performance of the k-nearest neighbors (KNN), support-vector machine (SVM) and gradient boosting (GB) algorithms in classifying well-established gait features. Our results show that the proposed system achieved 96–99% accuracy in evaluating the prognosis of Parkinsonian gaits. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)
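
The classifier comparison the abstract reports follows a standard scikit-learn pattern, sketched below with synthetic features standing in for the gait dataset; hyperparameters are defaults, not the paper's tuned values.

from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic 3-class "gait feature" data (stage labels are illustrative).
X, y = make_classification(n_samples=300, n_features=8, n_classes=3,
                           n_informative=6, random_state=0)
for name, clf in [("KNN", KNeighborsClassifier(5)),
                  ("SVM", SVC(kernel="rbf", C=1.0)),
                  ("GB", GradientBoostingClassifier())]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")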

12 pages, 3906 KiB  
Article
Path-Planning System for Radioisotope Identification Devices Using 4π Gamma Imaging Based on Random Forest Analysis
by Hideki Tomita, Shintaro Hara, Atsushi Mukai, Keita Yamagishi, Hidetake Ebi, Kenji Shimazoe, Yusuke Tamura, Hanwool Woo, Hiroyuki Takahashi, Hajime Asama, Fumihiko Ishida, Eiji Takada, Jun Kawarabayashi, Kosuke Tanabe and Kei Kamada
Sensors 2022, 22(12), 4325; https://doi.org/10.3390/s22124325 - 7 Jun 2022
Cited by 7 | Viewed by 2431
Abstract
We developed a path-planning system for radiation source identification devices using 4π gamma imaging. The estimated source location and activity were calculated by an integrated simulation model using 4π gamma images at multiple measurement positions. Using these calculated values, a prediction model to estimate the probability of identification at the next measurement position was created via random forest analysis. The path-planning system based on the prediction model was verified by integrated simulation and by experiment for a 137Cs point source. The results showed that 137Cs point sources were identified using only the few measurement positions suggested by the path-planning system. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)
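
A schematic version of the prediction model, a random forest scoring candidate next measurement positions by their probability of identifying the source, is sketched below. The feature set and synthetic labels are hypothetical stand-ins for the integrated-simulation outputs.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
X = rng.uniform(size=(400, 3))       # e.g. [distance, count rate, view angle]
y = (X[:, 1] > 0.5).astype(int)      # 1 = source identified (synthetic rule)

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
candidates = rng.uniform(size=(5, 3))        # candidate next positions
p_identify = rf.predict_proba(candidates)[:, 1]
best = candidates[np.argmax(p_identify)]     # move to the most promising one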

17 pages, 7236 KiB  
Article
A Long Short-Term Memory Network for Plasma Diagnosis from Langmuir Probe Data
by Jin Wang, Wenzhu Ji, Qingfu Du, Zanyang Xing, Xinyao Xie and Qinghe Zhang
Sensors 2022, 22(11), 4281; https://doi.org/10.3390/s22114281 - 4 Jun 2022
Cited by 4 | Viewed by 2658
Abstract
Electrostatic probe diagnosis is the main method of plasma diagnosis. However, traditional diagnosis theory is affected by many factors, and it is difficult to obtain accurate diagnostic results. In this study, a long short-term memory (LSTM) approach is used for plasma probe diagnosis to derive the electron density (Ne) and temperature (Te) more accurately and quickly. The LSTM network uses the data collected by Langmuir probes as input to eliminate the influence of the discharge device on the diagnosis, so it can be applied to a variety of discharge environments and even to space ionospheric diagnosis. In a high-vacuum gas discharge environment, a Langmuir probe is used to obtain current–voltage (I–V) characteristic curves under different Ne and Te. Part of the data is selected to train the network, the remainder is used as the test set, and the parameters are adjusted so that the network obtains better prediction results. Two metrics, namely the mean squared error (MSE) and the mean absolute percentage error (MAPE), are evaluated to calculate the prediction accuracy. The results show that using an LSTM to diagnose plasma can reduce the impact of probe surface contamination that affects traditional diagnosis methods and can accurately diagnose underdense plasma. In addition, compared with Te, the Ne diagnosis output by the LSTM is more accurate. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)
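
A minimal PyTorch sketch of the regression setup, an LSTM mapping a sampled I–V curve to the pair (Ne, Te), is given below; shapes, hidden size and the random data are illustrative assumptions.

import torch
import torch.nn as nn

class ProbeLSTM(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)      # outputs: [Ne, Te]

    def forward(self, iv):                    # iv: (batch, samples, 1)
        _, (h, _) = self.lstm(iv)
        return self.head(h[-1])               # regress from the last state

model = ProbeLSTM()
iv = torch.randn(8, 200, 1)                   # 8 curves, 200 current samples
pred = model(iv)                              # (8, 2) -> Ne, Te estimates
loss = nn.MSELoss()(pred, torch.randn(8, 2))  # train against known targets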

18 pages, 1401 KiB  
Article
RadArnomaly: Protecting Radar Systems from Data Manipulation Attacks
by Shai Cohen, Efrat Levy, Avi Shaked, Tair Cohen, Yuval Elovici and Asaf Shabtai
Sensors 2022, 22(11), 4259; https://doi.org/10.3390/s22114259 - 2 Jun 2022
Cited by 3 | Viewed by 3177
Abstract
Radar systems are mainly used for tracking aircraft, missiles, satellites, and watercraft. In many cases, information regarding the objects detected by a radar system is sent to, and used by, a peripheral consuming system, such as a missile system or a graphical user interface used by an operator. Those systems process the data stream and make real-time operational decisions based on the data received. Given this, the reliability and availability of the information provided by radar systems have grown in importance. Although the field of cyber security has been continuously evolving, no prior research has focused on anomaly detection in radar systems. In this paper, we present an unsupervised deep-learning-based method for detecting anomalies in radar system data streams; we take into consideration the fact that a data stream created by a radar system is heterogeneous, i.e., it contains both numerical and categorical features with non-linear and complex relationships. We propose a novel technique that learns the correlation between the numerical features and an embedding representation of the categorical features in an unsupervised manner. The proposed technique, which allows for the detection of malicious manipulation of critical fields in a data stream, is complemented by a timing-interval anomaly-detection mechanism proposed for the detection of message-dropping attempts. Real radar system data were used to evaluate the proposed method. Our experiments demonstrated the method's high detection accuracy on a variety of data-stream manipulation attacks (an average detection rate of 88% with a false-alarm rate of 1.59%) and message-dropping attacks (an average detection rate of 92% with a false-alarm rate of 2.2%). Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)
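
The central modeling idea, learning numerical features jointly with an embedding of categorical fields in an unsupervised autoencoder and scoring anomalies by reconstruction error, can be sketched in PyTorch as below; field counts and layer sizes are hypothetical.

import torch
import torch.nn as nn

class HybridAE(nn.Module):
    def __init__(self, n_numeric=6, n_categories=10, emb_dim=4):
        super().__init__()
        self.emb = nn.Embedding(n_categories, emb_dim)
        d = n_numeric + emb_dim
        self.enc = nn.Sequential(nn.Linear(d, 16), nn.ReLU(), nn.Linear(16, 8))
        self.dec = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, d))

    def forward(self, x_num, x_cat):
        z = torch.cat([x_num, self.emb(x_cat)], dim=1)  # joint representation
        return self.dec(self.enc(z)), z

model = HybridAE()
x_num, x_cat = torch.randn(32, 6), torch.randint(0, 10, (32,))
recon, target = model(x_num, x_cat)
anomaly_score = ((recon - target) ** 2).mean(dim=1)     # per-message score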

19 pages, 1714 KiB  
Article
Transfer Learning for Sentiment Analysis Using BERT Based Supervised Fine-Tuning
by Nusrat Jahan Prottasha, Abdullah As Sami, Md Kowsher, Saydul Akbar Murad, Anupam Kumar Bairagi, Mehedi Masud and Mohammed Baz
Sensors 2022, 22(11), 4157; https://doi.org/10.3390/s22114157 - 30 May 2022
Cited by 109 | Viewed by 18346
Abstract
The growth of the Internet has expanded the amount of data expressed by users across multiple platforms. The availability of these different worldviews and individuals' emotions empowers sentiment analysis. However, sentiment analysis becomes even more challenging due to the scarcity of standardized labeled data in the Bangla NLP domain. The majority of existing Bangla research has relied on deep learning models that focus heavily on context-independent word embeddings, such as Word2Vec, GloVe, and fastText, in which each word has a fixed representation irrespective of its context. Meanwhile, context-based pre-trained language models such as BERT have recently revolutionized the state of natural language processing. In this work, we applied BERT's transfer learning ability to a deep integrated CNN-BiLSTM model for enhanced decision-making performance in sentiment analysis. In addition, we also applied transfer learning to classical machine learning algorithms for performance comparison with the CNN-BiLSTM. Additionally, we explored various word embedding techniques, such as Word2Vec, GloVe, and fastText, and compared their performance to the BERT transfer learning strategy. As a result, we have shown state-of-the-art binary classification performance for Bangla sentiment analysis that significantly outperforms all of the other embeddings and algorithms. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)
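
The transfer-learning pattern described, extracting contextual BERT features for a downstream classifier, is sketched below with the Hugging Face Transformers library. The multilingual checkpoint is an assumption (the paper targets Bangla), and the CNN-BiLSTM head is only indicated.

import torch
from transformers import AutoModel, AutoTokenizer

# A Bangla-capable checkpoint is assumed; swap in the model actually used.
tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
bert = AutoModel.from_pretrained("bert-base-multilingual-cased")

batch = tok(["example sentence"], return_tensors="pt",
            padding=True, truncation=True)
with torch.no_grad():
    hidden = bert(**batch).last_hidden_state   # (1, tokens, 768)

# Feed `hidden` to a CNN-BiLSTM head, or pool it for a classical classifier.
features = hidden.mean(dim=1)                  # simple mean pooling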

15 pages, 3188 KiB  
Article
Dual Projection Fusion for Reference-Based Image Super-Resolution
by Ruirong Lin and Nanfeng Xiao
Sensors 2022, 22(11), 4119; https://doi.org/10.3390/s22114119 - 28 May 2022
Cited by 2 | Viewed by 2263
Abstract
Reference-based image super-resolution (RefSR) methods have achieved performance superior to that of single image super-resolution (SISR) methods by transferring texture details from an additional high-resolution (HR) reference image to the low-resolution (LR) image. However, existing RefSR methods simply add or concatenate the transferred texture feature with the LR features, which cannot effectively fuse the information of these two independently extracted features. Therefore, this paper proposes dual projection fusion for reference-based image super-resolution (DPFSR), which enables the network to focus more on the differing information between feature sources through inter-residual projection operations, ensuring that detailed information is effectively filled into the LR feature. Moreover, this paper also proposes a novel backbone called the deep channel attention connection network (DCACN), which is capable of extracting valuable high-frequency components from the LR space to further facilitate the effectiveness of image reconstruction. Experimental results show that we achieve the best peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) performance compared with state-of-the-art (SOTA) SISR and RefSR methods. Visual results demonstrate that the proposed method recovers more natural and realistic texture details. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)

17 pages, 13292 KiB  
Article
Deep Learning Regression Approaches Applied to Estimate Tillering in Tropical Forages Using Mobile Phone Images
by Luiz Santos, José Marcato Junior, Pedro Zamboni, Mateus Santos, Liana Jank, Edilene Campos and Edson Takashi Matsubara
Sensors 2022, 22(11), 4116; https://doi.org/10.3390/s22114116 - 28 May 2022
Cited by 2 | Viewed by 2953
Abstract
We assessed the performance of Convolutional Neural Network (CNN)-based approaches using mobile phone images to estimate regrowth density in tropical forages. We generated a dataset composed of 1124 labeled images captured with 2 mobile phones 7 days after the harvest of the forage plants. Six architectures were evaluated, including AlexNet, ResNet (18, 34, and 50 layers), ResNeXt101, and DarkNet. The best regression model showed a mean absolute error of 7.70 and a correlation of 0.89. Our findings suggest that deep learning applied to mobile phone images can successfully be used to estimate regrowth density in forages. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)
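
The regression recipe evaluated, an ImageNet-style CNN with its classification head replaced by a single-output regressor trained with an L1 objective, can be sketched with torchvision as follows. The input sizes and random tensors are placeholders for the labeled phone images.

import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=None)            # or pretrained ImageNet weights
model.fc = nn.Linear(model.fc.in_features, 1)    # single-output regression head

images = torch.randn(4, 3, 224, 224)   # batch of normalized phone images
density = model(images)                # (4, 1) predicted regrowth density
loss = nn.L1Loss()(density, torch.rand(4, 1))    # MAE-style training objective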

19 pages, 4494 KiB  
Article
TimeREISE: Time Series Randomized Evolving Input Sample Explanation
by Dominique Mercier, Andreas Dengel and Sheraz Ahmed
Sensors 2022, 22(11), 4084; https://doi.org/10.3390/s22114084 - 27 May 2022
Cited by 4 | Viewed by 2245
Abstract
Deep neural networks are among the most successful classifiers across different domains. However, their use is limited in safety-critical areas due to their shortcomings concerning interpretability. The research field of explainable artificial intelligence addresses this problem. However, most interpretability methods are designed for the imaging modality. This paper introduces TimeREISE, a model-agnostic attribution method that shows success in the context of time series classification. The method applies perturbations to the input and considers different attribution map characteristics such as the granularity and density of an attribution map. The approach demonstrates superior performance compared to existing methods on several well-established measurements. TimeREISE shows impressive results in the deletion and insertion test, Infidelity, and Sensitivity. Concerning the continuity of an explanation, it shows superior performance while preserving the correctness of the attribution map. Additional sanity checks prove the correctness of the approach and its dependence on the model parameters. TimeREISE scales well with an increasing number of channels and timesteps, applies to any time series classification network, and does not rely on prior data knowledge. It is suited to any use case independent of dataset characteristics such as sequence length, channel number, and number of classes. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)
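
A generic randomized-masking attribution in the RISE family, which is the broad idea behind TimeREISE, is sketched below for any callable time-series classifier; the mask distribution here is a simplification of the method's granularity and density controls.

import numpy as np

def attribute(model, x, target, n_masks=500, keep_prob=0.5, rng=None):
    """x: (timesteps, channels). Returns an importance map of the same shape."""
    rng = rng or np.random.default_rng(0)
    importance = np.zeros_like(x, dtype=float)
    for _ in range(n_masks):
        mask = rng.random(x.shape) < keep_prob
        score = model(x * mask)[target]   # target-class score under perturbation
        importance += score * mask        # credit the regions that were kept
    return importance / n_masks

# Toy model: class-1 "probability" rises with the mean of channel 0.
model = lambda x: np.array([1.0 - x[:, 0].mean(), x[:, 0].mean()])
x = np.random.default_rng(1).random((100, 3))
attr = attribute(model, x, target=1)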

39 pages, 1219 KiB  
Review
Recent Trends in AI-Based Intelligent Sensing
by Abhishek Sharma, Vaidehi Sharma, Mohita Jaiswal, Hwang-Cheng Wang, Dushantha Nalin K. Jayakody, Chathuranga M. Wijerathna Basnayaka and Ammar Muthanna
Electronics 2022, 11(10), 1661; https://doi.org/10.3390/electronics11101661 - 23 May 2022
Cited by 17 | Viewed by 10694
Abstract
In recent years, intelligent sensing has gained significant attention because of its autonomous decision-making ability to solve complex problems. Today, smart sensors complement and enhance the capabilities of human beings and have been widely embraced in numerous application areas. Artificial intelligence (AI) has achieved astounding growth in the domains of natural language processing, machine learning (ML), and computer vision. AI-based methods enable a computer to learn and monitor activities by sensing the source of information in a real-time environment. The combination of these two technologies provides a promising solution for intelligent sensing. This survey provides a comprehensive summary of recent research on AI-based algorithms for intelligent sensing. This work also presents a comparative analysis of algorithms, models, influential parameters, available datasets, applications and projects in the area of intelligent sensing. Furthermore, we present a taxonomy of AI models along with cutting-edge approaches. Finally, we highlight the challenges and open issues, followed by future research directions pertaining to this exciting and fast-moving field. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)

30 pages, 23011 KiB  
Article
TCSPANet: Two-Staged Contrastive Learning and Sub-Patch Attention Based Network for PolSAR Image Classification
by Yuanhao Cui, Fang Liu, Xu Liu, Lingling Li and Xiaoxue Qian
Remote Sens. 2022, 14(10), 2451; https://doi.org/10.3390/rs14102451 - 20 May 2022
Cited by 11 | Viewed by 2390
Abstract
Polarimetric synthetic aperture radar (PolSAR) image classification has achieved great progress, but some obstacles remain. On the one hand, a large amount of PolSAR data is captured, yet most of it is not labeled with land cover categories and therefore cannot be fully utilized. On the other hand, annotating PolSAR images relies heavily on domain knowledge and manpower, which makes pixel-level annotation harder. To alleviate these problems, by integrating contrastive learning and the transformer, we propose a novel patch-level PolSAR image classification method, the two-staged contrastive learning and sub-patch attention based network (TCSPANet). First, the two-staged contrastive learning based network (TCNet) is designed to learn the representation information of PolSAR images without supervision and to obtain discrimination and comparability for actual land covers. Then, drawing on the transformer, we construct the sub-patch attention encoder (SPAE) to model the context within patch samples. To train the TCSPANet, two patch-level datasets are built up based on unsupervised and semi-supervised methods. For prediction, a classify-or-split classification algorithm is put forward to realise non-overlapping, coarse-to-fine patch-level classification. The classification results on multiple PolSAR images with one trained model suggest that our proposed model is superior to the compared methods. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)

41 pages, 5227 KiB  
Review
Deep Learning-Based Object Detection Techniques for Remote Sensing Images: A Survey
by Zheng Li, Yongcheng Wang, Ning Zhang, Yuxi Zhang, Zhikang Zhao, Dongdong Xu, Guangli Ben and Yunxiao Gao
Remote Sens. 2022, 14(10), 2385; https://doi.org/10.3390/rs14102385 - 16 May 2022
Cited by 83 | Viewed by 12547
Abstract
Object detection in remote sensing images (RSIs) requires locating and classifying objects of interest, which is a hot topic in RSI analysis research. With the development of deep learning (DL) technology, which has accelerated in recent years, numerous intelligent and efficient detection algorithms have been proposed. Meanwhile, the performance of remote sensing imaging hardware has also evolved significantly. The detection technology used with high-resolution RSIs has been pushed to unprecedented heights, making important contributions in practical applications such as urban detection, building planning, and disaster prediction. However, although some scholars have authored reviews on DL-based object detection systems, the leading DL-based object detection improvement strategies have never been summarized in detail. In this paper, we first briefly review the recent history of remote sensing object detection (RSOD) techniques, including traditional methods as well as DL-based methods. Then, we systematically summarize the procedures used in DL-based detection algorithms. Most importantly, starting from the problems faced by high-resolution RSI object detection, namely complex object features, complex background information, and tedious sample annotation, we introduce a taxonomy based on various detection methods, which focuses on summarizing and classifying the existing attention mechanisms, multi-scale feature fusion, super-resolution and other major improvement strategies. We also introduce recognized open-source remote sensing detection benchmarks and evaluation metrics. Finally, based on the current state of the technology, we conclude by discussing the challenges and potential trends in the field of RSOD in order to provide a reference for researchers who have just entered the field. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)

20 pages, 44887 KiB  
Article
Extraction of Micro-Doppler Feature Using LMD Algorithm Combined Supplement Feature for UAVs and Birds Classification
by Ting Dai, Shiyou Xu, Biao Tian, Jun Hu, Yue Zhang and Zengping Chen
Remote Sens. 2022, 14(9), 2196; https://doi.org/10.3390/rs14092196 - 4 May 2022
Cited by 9 | Viewed by 3029
Abstract
In the past few decades, the demand for reliable and robust systems capable of monitoring unmanned aerial vehicles (UAVs) has increased significantly due to the security threats arising from their wide application. During UAV surveillance, birds are a typical confuser target; therefore, discriminating UAVs from birds is critical for successful non-cooperative UAV surveillance. The micro-Doppler signature (m-DS) reflects the scattering characteristics of micro-motion targets and has been utilized for many radar automatic target recognition (RATR) tasks. In this paper, the authors deploy local mean decomposition (LMD) to separate the m-DS of the micro-motion parts from the body returns of the UAVs and birds. After the separation, the rotating parts are obtained without interference from the body components, and the m-DS features can be revealed more clearly, which is conducive to feature extraction. There are, however, some problems with using the m-DS alone for target classification. Firstly, extracting only m-DS features makes incomplete use of the information in the spectrogram. Secondly, m-DS can be observed only for metal-rotor UAVs, or for large UAVs when they are close to the radar. Lastly, m-DS cannot be observed when birds are small or gliding. The authors thus propose an algorithm for the RATR of UAVs and interfering targets under a new L-band staring radar system. In this algorithm, to make full use of the information in the spectrogram and to supplement it in the exceptional situations above, the m-DS, movement, and energy aggregation features of the target are extracted from the spectrogram. On the benchmark dataset, the proposed algorithm demonstrates better performance than the state-of-the-art algorithms. More specifically, the equal error rate (EER) of the proposed method is 2.56% lower than that of existing methods, which demonstrates the effectiveness of the proposed algorithm. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)

29 pages, 5556 KiB  
Article
Electrocardiogram Biometrics Using Transformer’s Self-Attention Mechanism for Sequence Pair Feature Extractor and Flexible Enrollment Scope Identification
by Kai Jye Chee and Dzati Athiar Ramli
Sensors 2022, 22(9), 3446; https://doi.org/10.3390/s22093446 - 30 Apr 2022
Cited by 10 | Viewed by 3566
Abstract
Existing electrocardiogram (ECG) biometrics do not perform well when the ECG changes after the enrollment phase, because the feature extraction is not able to relate the ECG collected during enrollment to the ECG collected during classification. In this research, we propose the sequence pair feature extractor, inspired by the sentence pair task of Bidirectional Encoder Representations from Transformers (BERT), to obtain a dynamic representation of a pair of ECGs. We also propose using the self-attention mechanism of the transformer to draw an inter-identity relationship when performing ECG identification tasks. The model was trained once with datasets built from 10 ECG databases and then applied to six other ECG databases without retraining. We emphasize the significance of the time separation between enrollment and classification when presenting the results. The model scored 96.20%, 100.0%, 99.91%, 96.09%, 96.35%, and 98.10% identification accuracy on the MIT-BIH Atrial Fibrillation Database (AFDB), the Combined measurement of ECG, Breathing and Seismocardiograms database (CEBSDB), the MIT-BIH Normal Sinus Rhythm Database (NSRDB), the MIT-BIH ST Change Database (STDB), the ECG-ID Database (ECGIDDB), and the PTB Diagnostic ECG Database (PTBDB), respectively, over a short time separation. The model scored 92.70% and 64.16% identification accuracy on ECGIDDB and PTBDB, respectively, over a long time separation, which is a significant improvement over state-of-the-art methods. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)

19 pages, 3253 KiB  
Article
Convolutional Neural Network-Based Radar Antenna Scanning Period Recognition
by Bin Wang, Shunan Wang, Dan Zeng and Min Wang
Electronics 2022, 11(9), 1383; https://doi.org/10.3390/electronics11091383 - 26 Apr 2022
Cited by 4 | Viewed by 3139
Abstract
The antenna scanning period (ASP) of a radar is a crucial parameter in electronic warfare (EW) and is used in many applications, such as radar work pattern recognition and emitter recognition. For the antennas of radars and EW systems that scan circularly, methods based on threshold measurement are invalid. To overcome this shortcoming, this study proposes a method using a convolutional neural network (CNN) to recognize the ASP of a radar under the condition that the antennas of both the radar and the EW system scan circularly. A system model is constructed, and the factors affecting the received signal power are analyzed. A CNN model for rapid and accurate radar ASP classification is developed. A large number of received-signal time-power images for three separate ASPs are used for training and testing the developed model under different experimental conditions. Numerical experimental results and a performance comparison demonstrate the high classification accuracy and effectiveness of the proposed method under this circular-scan condition: the average recognition accuracy for the radar ASP is at least 90% when the signal-to-noise ratio (SNR) is not less than 30 dB, which is significantly higher than the recognition accuracy of the NAC and AFT methods based on adaptive threshold detection. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)
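
A toy CNN classifier over time-power images with three ASP classes, mirroring the setup described, is sketched below; the architecture and image size are illustrative, not the paper's model.

import torch
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(32 * 16 * 16, 3),   # 3 ASP classes
)
images = torch.randn(8, 1, 64, 64)   # received-power vs. time images
logits = cnn(images)
pred_asp = logits.argmax(dim=1)      # predicted scanning-period class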

17 pages, 4965 KiB  
Article
A Frame-to-Frame Scan Matching Algorithm for 2D Lidar Based on Attention
by Shan Huang and Hong-Zhong Huang
Appl. Sci. 2022, 12(9), 4341; https://doi.org/10.3390/app12094341 - 25 Apr 2022
Cited by 3 | Viewed by 3047
Abstract
The frame-to-frame scan matching algorithm is the most basic robot localization and mapping module and has a huge impact on the accuracy of localization and mapping tasks. To achieve high-precision localization and mapping, we propose a 2D lidar frame-to-frame scan matching algorithm based on an attention mechanism, called ASM (Attention-based Scan Matching). Inspired by human navigation, we use a heuristic attention selection mechanism that considers only the areas covered by the robot's attention, while ignoring other areas, when performing frame-to-frame scan matching, thereby achieving performance similar to landmark-based localization. The selected landmark is not switched to another one before it becomes invisible; thus, ASM does not accumulate errors during the life cycle of a landmark, and the errors increase only when the landmark switches. Ideally, errors accumulate only each time the robot moves the distance of the lidar sensing range, so the ASM algorithm can achieve high matching accuracy. On the other hand, thanks to the attention mechanism, the amount of data involved in scan matching is small compared to the total; as a result, the ASM algorithm has high computational efficiency. To prove the effectiveness of the ASM algorithm, we conducted experiments on four datasets. The experimental results show that, compared to current methods, ASM achieves higher matching accuracy and speed. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)

15 pages, 2527 KiB  
Article
Prediction of Upper Limb Action Intention Based on Long Short-Term Memory Neural Network
by Jianwei Cui and Zhigang Li
Electronics 2022, 11(9), 1320; https://doi.org/10.3390/electronics11091320 - 21 Apr 2022
Cited by 5 | Viewed by 2036
Abstract
The use of an inertial measurement unit (IMU) to measure the motion data of the upper limb is a mature method, and the IMU has gradually become an important device for obtaining the information sources used to control assistive prosthetic hands. However, IMU-based control methods for assistive prosthetic hands often suffer from high delay. Therefore, this paper proposes a method for predicting the action intentions of the upper limb based on a long short-term memory (LSTM) neural network. First, the degree of correlation between palm movement and arm movement is examined by calculating the Pearson correlation coefficient. The correlation coefficients are all greater than 0.6, indicating a strong correlation between palm movement and arm movement. Then, the motion state of the upper limb is divided into an acceleration state, a deceleration state and a rest state, and the rest state of the upper limb is used as the signal to control the assistive prosthetic hand. Using the LSTM to identify the motion state of the upper limb, the accuracy rate is 99%. When predicting the action intention of the upper limb from the angular velocities of the shoulder and forearm, the LSTM is used to predict the angular velocity of the palm, with an average prediction error of 1.5 rad/s. Finally, the feasibility of the method is verified through experiments in which an assistive prosthetic hand is held to imitate a disabled person wearing a prosthesis. The assistive prosthetic hand is used to reproduce foot actions, and the average delay time of the foot action, measured using the LSTM-based method, was 0.65 s, whereas the average delay time of the manipulator control method based on threshold analysis is 1.35 s. Our experiments show that the LSTM-based prediction method achieves low prediction error and delay. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)
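
The correlation check that motivates the predictor can be reproduced in a few lines with SciPy, as sketched below on synthetic angular-velocity traces; 0.6 is the strong-correlation threshold quoted in the abstract.

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
arm = np.sin(np.linspace(0, 10, 500)) + 0.1 * rng.normal(size=500)
palm = 0.8 * arm + 0.2 * rng.normal(size=500)   # correlated palm motion

r, p = pearsonr(arm, palm)
if r > 0.6:   # strong correlation: palm motion is predictable from arm motion
    print(f"strong correlation (r = {r:.2f}); proceed with LSTM prediction")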

17 pages, 3194 KiB  
Article
Redundancy Reduction for Sensor Deployment in Prosthetic Socket: A Case Study
by Wenyao Zhu, Yizhi Chen, Siu-Teing Ko and Zhonghai Lu
Sensors 2022, 22(9), 3103; https://doi.org/10.3390/s22093103 - 19 Apr 2022
Cited by 2 | Viewed by 2824
Abstract
The irregular pressure exerted by a prosthetic socket on the residual limb is one of the major causes of discomfort for amputees using artificial limbs. By deploying wearable sensors inside the socket, the interfacial pressure distribution can be studied to find the active regions and rectify the socket design. In this case study, a clustering-based analysis method is presented to evaluate the density and layout of these sensors, with the aim of reducing the local redundancy of the sensor deployment. In particular, a Self-Organizing Map (SOM) and the K-means algorithm are employed to cluster the sensor data, taking the pressure measurements of a predefined sensor placement as input. One suitable clustering result is then selected to detect layout redundancy in the input area. After that, the Pearson correlation coefficient (PCC) is used as a similarity metric to guide the removal of redundant sensors and generate a new, sparser layout. The Jensen–Shannon divergence (JSD) and the mean pressure are applied as posterior validation metrics that compare the pressure features before and after sensor removal. A case study from a clinical trial with two sensor strips is used to prove the utility of the clustering-based analysis method. The sensors on the posterior and medial regions are suggested for reduction, while the main pressure features are kept. The proposed method can help sensor designers optimize sensor configurations for intra-socket measurements and thus assist prosthetists in improving socket fitting. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)
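
The PCC-guided redundancy step can be sketched as a greedy filter that keeps a sensor only if it is not highly correlated with an already-kept one. The 0.95 threshold and the synthetic readings below are illustrative, and the SOM/K-means clustering stage is omitted.

import numpy as np

rng = np.random.default_rng(4)
base = rng.normal(size=(1000, 4))   # 4 independent pressure signals
# Sensor 4 nearly duplicates sensor 0, i.e. it is locally redundant.
readings = np.column_stack([base, base[:, 0] + 0.01 * rng.normal(size=1000)])

corr = np.corrcoef(readings.T)      # sensor-by-sensor Pearson correlations
keep = []
for i in range(corr.shape[0]):
    if all(abs(corr[i, j]) < 0.95 for j in keep):
        keep.append(i)              # drop sensors whose readings are duplicated
print("sensors kept:", keep)        # e.g. [0, 1, 2, 3]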

25 pages, 7258 KiB  
Article
Development of a Soft Sensor for Flow Estimation in Water Supply Systems Using Artificial Neural Networks
by Robson Pacífico Guimarães Lima, Juan Moises Mauricio Villanueva, Heber Pimentel Gomes and Thommas Kevin Sales Flores
Sensors 2022, 22(8), 3084; https://doi.org/10.3390/s22083084 - 18 Apr 2022
Cited by 9 | Viewed by 3208
Abstract
A water supply system is considered an essential service to the population, as it provides a good that is essential for life. Such a system typically consists of several sensors, transducers, pumps, etc., and some of these elements are costly and/or complex to install. The indirect measurement of a quantity can be used to obtain a desired variable, dispensing with the use of a specific sensor in the plant. Among the contributions of this technique are the design of a pressure controller using adaptive control and the use of an artificial neural network to construct nonlinear models from inherent system parameters, such as pressure, motor rotation frequency and control valve angle, with the purpose of estimating the flow. Further benefits of the research include removing the need to acquire and physically install dedicated flow meters. Validation was carried out through tests on an experimental bench located in the Laboratory of Energy and Hydraulic Efficiency in Sanitation of the Federal University of Paraiba. The results of the soft sensor were compared with those of an electromagnetic flow sensor, obtaining a maximum error of 10%. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)
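
A minimal soft-sensor sketch in scikit-learn, regressing flow from pressure, rotation frequency and valve angle as the paper's network does, is given below. The synthetic plant relation is invented for illustration; a real deployment would train on bench measurements.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
X = rng.uniform(size=(2000, 3))   # [pressure, rotation frequency, valve angle]
# Hypothetical plant behaviour standing in for the real bench data.
flow = 2.0 * X[:, 1] * X[:, 2] - 0.5 * X[:, 0] + 0.02 * rng.normal(size=2000)

soft_sensor = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                           random_state=0).fit(X[:1500], flow[:1500])
rel_err = np.abs(soft_sensor.predict(X[1500:]) - flow[1500:])
print(f"max error: {rel_err.max() / np.abs(flow[1500:]).max():.2%}")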

25 pages, 27343 KiB  
Article
Application of an Improved YOLOv5 Algorithm in Real-Time Detection of Foreign Objects by Ground Penetrating Radar
by Zhi Qiu, Zuoxi Zhao, Shaoji Chen, Junyuan Zeng, Yuan Huang and Borui Xiang
Remote Sens. 2022, 14(8), 1895; https://doi.org/10.3390/rs14081895 - 14 Apr 2022
Cited by 43 | Viewed by 6820
Abstract
Ground penetrating radar (GPR) detection is a popular technology in civil engineering. Because of its advantages of non-destructive testing (NDT) and high work efficiency, GPR is widely used to detect hard foreign objects in soil. However, the interpretation of GPR images relies heavily on the experience of researchers, which can lead to low detection efficiency and a high false recognition rate. Therefore, this paper proposes a real-time GPR detection technology based on deep learning for soil foreign object detection. In this study, GPR image signals are obtained in real time by the GPR instrument and software, and the image signals are preprocessed to improve their signal-to-noise ratio and image quality. Then, in view of YOLOv5's poor detection of small targets, this study reduces false and missed detections in real-time GPR detection by improving the YOLOv5 network structure, adding an attention mechanism, applying data augmentation, and other means. Finally, by establishing a regression equation for the position information of the ground penetrating radar, precise localization of foreign matter in the underground soil is realized. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)

19 pages, 3546 KiB  
Article
Exploiting Graph and Geodesic Distance Constraint for Deep Learning-Based Visual Odometry
by Xu Fang, Qing Li, Qingquan Li, Kai Ding and Jiasong Zhu
Remote Sens. 2022, 14(8), 1854; https://doi.org/10.3390/rs14081854 - 12 Apr 2022
Cited by 4 | Viewed by 2644
Abstract
Visual odometry is the task of estimating the trajectory of the moving agents from consecutive images. It is a hot research topic both in robotic and computer vision communities and facilitates many applications, such as autonomous driving and virtual reality. The conventional odometry methods predict the trajectory by utilizing the multiple view geometry between consecutive overlapping images. However, these methods need to be carefully designed and fine-tuned to work well in different environments. Deep learning has been explored to alleviate the challenge by directly predicting the relative pose from the paired images. Deep learning-based methods usually focus on the consecutive images that are feasible to propagate the error over time. In this paper, graph loss and geodesic rotation loss are proposed to enhance deep learning-based visual odometry methods based on graph constraints and geodesic distance, respectively. The graph loss not only considers the relative pose loss of consecutive images, but also the relative pose of non-consecutive images. The relative pose of non-consecutive images is not directly predicted but computed from the relative pose of consecutive ones. The geodesic rotation loss is constructed by the geodesic distance and the model regresses a Lie algebra so(3) (3D vector). This allows a robust and stable convergence. To increase the efficiency, a random strategy is adopted to select the edges of the graph instead of using all of the edges. This strategy provides additional regularization for training the networks. Extensive experiments are conducted on visual odometry benchmarks, and the obtained results demonstrate that the proposed method has comparable performance to other supervised learning-based methods, as well as monocular camera-based methods. The source code and the weight are made publicly available. Full article
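The geodesic distance underlying the rotation loss has a compact closed form. A minimal sketch, written for batches of 3 × 3 rotation matrices rather than the so(3) vectors the paper regresses, is:

```python
import torch

def geodesic_rotation_loss(R_pred: torch.Tensor, R_gt: torch.Tensor) -> torch.Tensor:
    """Mean geodesic distance on SO(3) between batches of rotation matrices."""
    # Relative rotation; for identical rotations this is the identity matrix.
    R_rel = R_pred.transpose(-1, -2) @ R_gt
    # trace(R_rel) = 1 + 2cos(theta), where theta is the rotation angle.
    trace = R_rel.diagonal(dim1=-2, dim2=-1).sum(-1)
    # Clamping keeps acos differentiable near the boundary values +-1.
    cos_theta = ((trace - 1.0) / 2.0).clamp(-1.0 + 1e-7, 1.0 - 1e-7)
    return torch.acos(cos_theta).mean()

R = torch.eye(3).expand(4, 3, 3)
print(geodesic_rotation_loss(R, R))  # ~0 for identical rotations
```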
(This article belongs to the Topic Artificial Intelligence in Sensors)

20 pages, 6002 KiB  
Article
Multi-Agent Deep Q Network to Enhance the Reinforcement Learning for Delayed Reward System
by Keecheon Kim
Appl. Sci. 2022, 12(7), 3520; https://doi.org/10.3390/app12073520 - 30 Mar 2022
Cited by 10 | Viewed by 4793
Abstract
This study examines various factors and conditions related to the performance of reinforcement learning and defines a multi-agent DQN system (N-DQN) model to improve them. The N-DQN model is implemented with maze finding and ping-pong as examples of delayed-reward systems, in which delayed rewards make standard DQN learning difficult to apply. In the performance evaluation, the implemented N-DQN shows about 3.5 times higher learning performance than the Q-Learning algorithm in a reward-sparse environment, and it reaches the goal about 1.1 times faster than DQN. In addition, through the implementation of prioritized experience replay and a reward-acquisition-section segmentation policy, problems such as the positive bias of existing reinforcement learning models seldom or never occurred. However, because the architecture runs many actors in parallel, additional research on making the system lighter-weight is needed for further performance improvement. This paper describes in detail the structure of the proposed multi-agent N-DQN architecture, the algorithms used, and the specification of its implementation. Full article
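As background for the N-DQN discussion, the following is a minimal single-agent DQN update step with a separate target network; dimensions, network sizes, and hyperparameters are illustrative, and the paper's multi-actor architecture, prioritized replay, and reward-section segmentation are not reproduced:

```python
import torch
import torch.nn as nn

obs_dim, n_actions, gamma = 4, 2, 0.99
q_net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
target_net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def dqn_update(s, a, r, s_next, done):
    # Q(s, a) for the actions actually taken.
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    # Bootstrapped TD target from the frozen target network.
    with torch.no_grad():
        target = r + gamma * (1 - done) * target_net(s_next).max(1).values
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

# Toy batch of transitions, just to show the call signature.
s = torch.randn(32, obs_dim); a = torch.randint(0, n_actions, (32,))
r = torch.randn(32); s_next = torch.randn(32, obs_dim); done = torch.zeros(32)
print(dqn_update(s, a, r, s_next, done))
```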
(This article belongs to the Topic Artificial Intelligence in Sensors)

17 pages, 5233 KiB  
Article
A Novel Framework for Open-Set Authentication of Internet of Things Using Limited Devices
by Keju Huang, Junan Yang, Pengjiang Hu and Hui Liu
Sensors 2022, 22(7), 2662; https://doi.org/10.3390/s22072662 - 30 Mar 2022
Cited by 5 | Viewed by 2095
Abstract
The Internet of Things (IoT) promises to transform a wide range of fields. However, the open nature of the IoT exposes it to cybersecurity threats, of which identity spoofing is a typical example. Physical-layer authentication, which identifies IoT devices based on the physical-layer characteristics of signals, serves as an effective way to counteract identity spoofing. In this paper, we propose a deep learning-based framework for the open-set authentication of IoT devices. Specifically, additive angular margin softmax (AAMSoftmax) was utilized to enhance the discriminability of the learned features, and a modified OpenMAX classifier was employed to adaptively identify authorized devices and distinguish unauthorized ones. The experimental results for both simulated data and real ADS–B (Automatic Dependent Surveillance–Broadcast) data indicate that our framework achieved superior performance compared to current approaches, especially when the number of devices available for training is limited. Full article
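A sketch of the AAMSoftmax logit computation, which adds an angular margin to the ground-truth class angle before scaling, is given below; the s and m values are common defaults, not necessarily those used in the paper:

```python
import torch
import torch.nn.functional as F

def aam_softmax_logits(features, weights, labels, s=30.0, m=0.5):
    """Additive angular margin logits: cos(theta + m) for the true class."""
    # Cosine similarity between L2-normalized embeddings and class weights.
    cosine = F.normalize(features) @ F.normalize(weights).t()
    theta = torch.acos(cosine.clamp(-1 + 1e-7, 1 - 1e-7))
    target = F.one_hot(labels, weights.size(0)).bool()
    # Add the angular margin m only to the ground-truth class angle.
    logits = torch.where(target, torch.cos(theta + m), cosine)
    return s * logits  # scaled logits, fed to cross-entropy

feats = torch.randn(8, 128)              # embeddings from some encoder
W = torch.randn(10, 128)                 # one weight row per device class
labels = torch.randint(0, 10, (8,))
loss = F.cross_entropy(aam_softmax_logits(feats, W, labels), labels)
print(loss)
```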
(This article belongs to the Topic Artificial Intelligence in Sensors)

24 pages, 8432 KiB  
Article
Road Speed Prediction Scheme by Analyzing Road Environment Data
by Jongtae Lim, Songhee Park, Dojin Choi, Kyoungsoo Bok and Jaesoo Yoo
Sensors 2022, 22(7), 2606; https://doi.org/10.3390/s22072606 - 29 Mar 2022
Cited by 2 | Viewed by 2661
Abstract
Road speed is an important indicator of traffic congestion. Providing predicted road speeds to users helps distribute traffic and thereby reduces the occurrence of congestion, since traffic congestion prediction techniques can offer users alternative routes in advance to help them avoid traffic jams. In this paper, we propose a machine-learning-based road speed prediction scheme that analyzes road environment data. The proposed scheme uses not only the speed data of the target road, but also the speed data of neighboring roads that can affect the speed of the target road. Furthermore, it can accurately predict both the average road speed and rapidly changing road speeds. The scheme uses historical average speed data from the target road, organized by day of the week and hour, to reflect the average traffic flow on the road. Additionally, it analyzes speed changes in sections where the road speed changes rapidly, because road speeds may change rapidly as a result of unexpected events such as accidents, disasters, and construction work. The proposed scheme predicts final road speeds by applying historical road speeds and events as weights for the prediction, and it also considers weather conditions. It uses long short-term memory (LSTM), which is suitable for sequential data learning, as the machine learning algorithm for speed prediction, and it can predict road speeds 30 min ahead by using weather data and speed data from the target and neighboring roads as input. We demonstrate the capabilities of the proposed scheme through various performance evaluations. Full article
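A minimal sketch of the kind of LSTM regressor described above, assuming a hypothetical input window of past speed and weather feature vectors (the feature count, window length, and hidden size are placeholders, not the paper's configuration):

```python
import torch
import torch.nn as nn

class SpeedLSTM(nn.Module):
    """Sketch: predict the target road's speed 30 min ahead from a window of
    past speeds (target + neighboring roads) plus weather features."""
    def __init__(self, n_features=8, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):             # x: (batch, time_steps, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # regress from the last hidden state

model = SpeedLSTM()
window = torch.randn(16, 12, 8)       # 12 past intervals, 8 features (assumed)
pred = model(window)                  # (16, 1) predicted speeds
print(pred.shape)
```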
(This article belongs to the Topic Artificial Intelligence in Sensors)

17 pages, 14652 KiB  
Article
Condition Monitoring of Ball Bearings Based on Machine Learning with Synthetically Generated Data
by Matthias Kahr, Gabor Kovács, Markus Loinig and Hubert Brückl
Sensors 2022, 22(7), 2490; https://doi.org/10.3390/s22072490 - 24 Mar 2022
Cited by 18 | Viewed by 6240
Abstract
Rolling element bearing faults contribute significantly to overall machine failures, which demand different strategies for condition monitoring and failure detection. Recent advancements in machine learning further expedite the quest to improve fault detection accuracy for economic purposes by minimizing scheduled maintenance. Challenges still persist, however, such as gathering high-quality data to explicitly train an algorithm, which is limited by the availability of historical data. In addition, failure data from measurements are typically valid only for the particular machinery components and their settings. In this study, 3D multi-body simulations of a roller bearing with different faults were conducted to create a variety of synthetic training data for a deep learning convolutional neural network (CNN) and, hence, to address these challenges. The vibration data from the simulation are superimposed with noise collected from the measurement of a healthy bearing and are subsequently converted into a 2D image via wavelet transformation before being fed into the CNN for training. Measurements of damaged bearings are used to validate the algorithm's performance. Full article
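The wavelet-transformation step can be illustrated with a toy vibration signal: a sinusoidal "fault" component plus superimposed noise is converted into a 2D scalogram of the kind fed to the CNN. The sampling rate, fault frequency, and scale range below are assumptions, not values from the study:

```python
import numpy as np
import pywt

fs = 12_000                                   # assumed sampling rate [Hz]
t = np.arange(0, 0.1, 1 / fs)
signal = np.sin(2 * np.pi * 157 * t)          # toy "fault frequency" component
signal += 0.5 * np.random.randn(t.size)       # superimposed measurement noise

# Continuous wavelet transform with a Morlet wavelet; |coefficients| forms
# the 2D time-frequency "image" used as CNN input.
scales = np.arange(1, 65)
coeffs, _ = pywt.cwt(signal, scales, 'morl', sampling_period=1 / fs)
image = np.abs(coeffs)                        # shape: (64, len(signal))
print(image.shape)
```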
(This article belongs to the Topic Artificial Intelligence in Sensors)

19 pages, 5202 KiB  
Article
Classification of Tree Species in Different Seasons and Regions Based on Leaf Hyperspectral Images
by Rongchao Yang and Jiangming Kan
Remote Sens. 2022, 14(6), 1524; https://doi.org/10.3390/rs14061524 - 21 Mar 2022
Cited by 5 | Viewed by 2800
Abstract
This paper aims to establish a tree species identification model suitable for different seasons and regions based on leaf hyperspectral images, and to develop a more effective hyperspectral identification algorithm. First, the reflectance spectra of leaves in different seasons and regions were analyzed. Then, to solve the problem that the 0-elements in sparse random (SR) coding matrices degrade the classification performance of error-correcting output codes (ECOC), two versions of supervision-mechanism-based ECOC algorithms, namely SM-ECOC-V1 and SM-ECOC-V2, are proposed in this paper. In addition, the performance of the proposed algorithms was compared with that of six traditional algorithms using both all bands and feature bands. The experimental results show that seasonal and regional changes affect the reflectance spectra of leaves, especially in the near-infrared region of 760–1000 nm. When the spectral information of different seasons and regions is added to the identification model, tree species can be effectively classified. SM-ECOC-V2 achieves the best classification performance based on both all bands and feature bands. Furthermore, both SM-ECOC-V1 and SM-ECOC-V2 outperform ECOC under the SR coding strategy, indicating that the proposed methods can effectively avoid the influence of the 0-elements in SR coding matrices on classification performance. Full article
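For orientation, a baseline ECOC classifier is shown below using scikit-learn. With an estimator that exposes a decision function, scikit-learn draws a code book with entries in {−1, 1}, so it sidesteps the 0-element issue the paper targets in sparse random matrices; the SM-ECOC variants themselves are not reimplemented here, and the iris data is a stand-in for leaf hyperspectral features:

```python
from sklearn.datasets import load_iris
from sklearn.multiclass import OutputCodeClassifier
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)   # stand-in for leaf hyperspectral features

# code_size controls how many binary learners encode each class.
ecoc = OutputCodeClassifier(SVC(), code_size=2.0, random_state=0)
ecoc.fit(X, y)
print(ecoc.predict(X[:5]))
```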
(This article belongs to the Topic Artificial Intelligence in Sensors)

16 pages, 31866 KiB  
Article
DAN-SuperPoint: Self-Supervised Feature Point Detection Algorithm with Dual Attention Network
by Zhaoyang Li, Jie Cao, Qun Hao, Xue Zhao, Yaqian Ning and Dongxing Li
Sensors 2022, 22(5), 1940; https://doi.org/10.3390/s22051940 - 2 Mar 2022
Cited by 5 | Viewed by 4719
Abstract
In view of the poor performance of traditional feature point detection methods in low-texture situations, we design a new self-supervised feature extraction network, based on deep learning, that can be applied to the front-end feature extraction module of visual odometry (VO). First, the network uses a feature pyramid structure to perform multi-scale feature fusion and obtain a feature map containing multi-scale information. This feature map is then passed through a position attention module and a channel attention module to obtain the feature dependencies of the spatial and channel dimensions, respectively, and the weighted spatial and channel feature maps are added element-wise to enhance the feature representation. Finally, the weighted feature maps are trained for detectors and descriptors, respectively. In addition, to improve the prediction accuracy of feature point locations and speed up network convergence, we add a confidence loss term and a tolerance loss term to the loss functions of the detector and descriptor, respectively. Experiments show that our network achieves satisfactory performance on the HPatches and KITTI datasets, indicating its reliability. Full article
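A simplified, squeeze-and-excitation-style stand-in for the channel attention idea described above (the paper's exact module may differ) could look like this:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Global average pooling produces a channel descriptor, a small MLP turns
    it into per-channel weights, and the feature map is rescaled by them."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                      # x: (B, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))        # (B, C) channel weights
        return x * w[:, :, None, None]         # reweighted feature map

x = torch.randn(2, 64, 32, 32)
print(ChannelAttention(64)(x).shape)           # torch.Size([2, 64, 32, 32])
```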
(This article belongs to the Topic Artificial Intelligence in Sensors)

28 pages, 6651 KiB  
Article
Mixed Structure with 3D Multi-Shortcut-Link Networks for Hyperspectral Image Classification
by Hui Zheng, Yizhi Cao, Min Sun, Guihai Guo, Junzhen Meng, Xinwei Guo and Yanchi Jiang
Remote Sens. 2022, 14(5), 1230; https://doi.org/10.3390/rs14051230 - 2 Mar 2022
Cited by 1 | Viewed by 2750
Abstract
A hyperspectral image classification method based on a mixed structure with a 3D multi-shortcut-link network (MSLN) is proposed to address the few labeled samples, heavy noise, and spectral heterogeneity of homogeneous features in hyperspectral images. First, the spatial–spectral joint features of the hyperspectral cube data were extracted through a 3D convolution operation. Then, a deep network was constructed, and the 3D MSLN mixed structure was used to fuse shallow representational features and deep abstract features, while a hybrid activation function was utilized to preserve the integrity of the nonlinear data. Finally, global self-adaptive average pooling and an L-softmax classifier were introduced to implement the terrain classification of hyperspectral images. The mixed structure proposed in this study can extract multi-channel features with a vast receptive field and reduce the continuous decay of shallow features, while improving the utilization of representational features and enhancing the expressiveness of the deep network. The use of the dropout mechanism and the L-softmax classifier endows the learned features with better generalization, intraclass cohesion, and interclass separation. Experimental comparisons on six groups of datasets showed that this method, compared with existing deep-learning-based hyperspectral image classification methods, satisfactorily addresses the degeneration of deep networks and the issue of “the same object with distinct spectra, and distinct objects with the same spectrum.” It also effectively improves the terrain classification accuracy of hyperspectral images, as evinced by the overall classification accuracies on the six groups of datasets: 97.698%, 98.851%, 99.54%, 97.961%, 97.698%, and 99.138%. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)

19 pages, 8855 KiB  
Article
Multi-Sensor Fusion Self-Supervised Deep Odometry and Depth Estimation
by Yingcai Wan, Qiankun Zhao, Cheng Guo, Chenlong Xu and Lijing Fang
Remote Sens. 2022, 14(5), 1228; https://doi.org/10.3390/rs14051228 - 2 Mar 2022
Cited by 5 | Viewed by 4309
Abstract
This paper presents a new deep visual-inertial odometry and depth estimation framework that improves the accuracy of depth and ego-motion estimation from image sequences and raw inertial measurement unit (IMU) data. The proposed framework predicts ego-motion and depth with absolute scale in a self-supervised manner. We first capture dense features and solve the pose with deep visual odometry (DVO), and then combine the pose estimation pipeline with deep inertial odometry (DIO) via the extended Kalman filter (EKF) to produce sparse depth and pose with absolute scale. We then join deep visual-inertial odometry (DeepVIO) with depth estimation by using the sparse depth and pose from the DeepVIO pipeline to align the scale of the depth prediction with the triangulated point cloud and to reduce the image reconstruction error. Specifically, we use the strengths of learning-based visual-inertial odometry (VIO) and depth estimation to build an end-to-end self-supervised learning architecture. We evaluated the new framework on the KITTI datasets and compared it to previous techniques. We show that our approach improves ego-motion estimation and achieves comparable results for depth estimation, especially in detailed regions. Full article
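The EKF fusion step can be sketched generically: an inertial motion model drives the prediction and a visual pose measurement drives the update. The functions f and h and their Jacobians F and H below are placeholders, not the paper's actual models:

```python
import numpy as np

def ekf_step(x, P, u, z, f, F, h, H, Q, R):
    """One generic EKF predict/update cycle.

    x, P : state estimate and covariance
    u, z : control input (e.g. IMU) and measurement (e.g. visual pose)
    f, h : motion and measurement models; F, H their Jacobians at x
    Q, R : process and measurement noise covariances
    """
    # Predict with the (inertial) motion model.
    x_pred = f(x, u)
    P_pred = F @ P @ F.T + Q
    # Update with the (visual) measurement.
    y = z - h(x_pred)                      # innovation
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```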
(This article belongs to the Topic Artificial Intelligence in Sensors)

14 pages, 3592 KiB  
Article
Deep Convolutional Neural Network-Based Hemiplegic Gait Detection Using an Inertial Sensor Located Freely in a Pocket
by Hangsik Shin
Sensors 2022, 22(5), 1920; https://doi.org/10.3390/s22051920 - 1 Mar 2022
Cited by 6 | Viewed by 2852
Abstract
In most previous studies, the acceleration sensor is attached to a fixed position for gait analysis. However, for daily use, wearing a sensor in a fixed position may cause discomfort. In addition, since an acceleration sensor is built into the smartphones that people always carry, it is more efficient to use such a sensor than to wear a separate one. We aimed to distinguish between hemiplegic and normal walking by using the inertial signals measured by an acceleration sensor and a gyroscope. We used a machine learning model based on a convolutional neural network to classify hemiplegic gaits, with the acceleration and angular velocity signals obtained from a device located freely in the pocket used as inputs without any pre-processing. The model structure and hyperparameters were optimized using Bayesian optimization. We evaluated the performance of the developed model through a clinical trial comprising a walking test of 42 subjects (57.8 ± 13.8 years old, 165.1 ± 9.3 cm tall, weighing 66.3 ± 12.3 kg), including 21 hemiplegic patients. The optimized convolutional neural network model has one convolutional layer, 1033 fully connected nodes, a batch size of 77, a learning rate of 0.001, and a dropout rate of 0.48. The developed model showed an accuracy of 0.78, a precision of 0.80, a recall of 0.80, an area under the receiver operating characteristic curve of 0.80, and an area under the precision–recall curve of 0.84. We confirmed that a hemiplegic gait can be distinguished by applying a convolutional neural network to the signal measured by a six-axis inertial sensor located freely in the pocket, without additional pre-processing or feature extraction. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)

22 pages, 1350 KiB  
Article
A Gradually Linearizing Kalman Filter Bank Designing for Product-Type Strong Nonlinear Systems
by Chenglin Wen and Zhipeng Lin
Electronics 2022, 11(5), 714; https://doi.org/10.3390/electronics11050714 - 25 Feb 2022
Cited by 2 | Viewed by 1521
Abstract
Our study aimed to improve the poor performance of existing filters, such as the EKF, UKF, and CKF, which results from their weak ability to approximate nonlinear systems. This paper proposes a new extended Kalman filter bank for a class of product-type strongly nonlinear systems composed of system state variables, time-varying parameters, and nonlinear basis functions. First, the nonlinear basis functions are defined as hidden variables corresponding to the system state variables, which yields a simplified description of the strongly nonlinear systems. Second, based on the given prior information, we build two dynamic models that relate the future values of the parameters and hidden variables to their current values. Third, we describe how an extended Kalman filter bank is designed by gradually linearizing the strongly nonlinear systems with respect to the system state variables, time-varying parameters, and hidden variables, respectively. The first extended Kalman filter, for the future hidden variables, is designed using the current estimates of the state variables, parameters, and hidden variables. The second, for the future parameters, is designed using the current estimates of the state variables and parameters together with the future hidden variables. The third, for the future state variables, is designed using the current estimates of the state variables together with the future parameters and hidden variables. Finally, digital simulation experiments verify the effectiveness of this method. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)

21 pages, 20242 KiB  
Article
VGGFace-Ear: An Extended Dataset for Unconstrained Ear Recognition
by Solange Ramos-Cooper, Erick Gomez-Nieto and Guillermo Camara-Chavez
Sensors 2022, 22(5), 1752; https://doi.org/10.3390/s22051752 - 23 Feb 2022
Cited by 14 | Viewed by 7067
Abstract
Recognition using ear images has been an active field of research in recent years. Besides faces and fingerprints, ears have a structure that is unique enough to identify people, and they can be captured from a distance, contactlessly, and without the subject’s cooperation. The ear therefore represents an appealing modality for building surveillance, forensic, and security applications. However, many techniques used in those applications—e.g., convolutional neural networks (CNNs)—usually demand large-scale datasets for training. This work introduces a new dataset of ear images taken under uncontrolled conditions that presents high inter-class and intra-class variability. We built this dataset from an existing face dataset called VGGFace, which gathers more than 3.3 million images. In addition, we perform ear recognition using transfer learning with CNNs pretrained on image and face recognition. Finally, we performed two experiments on two unconstrained datasets and report our results using rank-based metrics. Full article
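A generic transfer-learning sketch in the spirit described: start from an ImageNet-pretrained backbone and retrain only a new classifier head for N ear identities. The backbone choice, class count, and freezing policy are assumptions (the paper additionally uses face-recognition pretraining), and the weights enum requires a recent torchvision:

```python
import torch.nn as nn
from torchvision import models

n_identities = 500                      # hypothetical number of ear classes

# ImageNet-pretrained backbone as a fixed feature extractor.
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for p in backbone.parameters():
    p.requires_grad = False             # freeze the pretrained features

# Replace the classification head; only this layer is trained.
backbone.fc = nn.Linear(backbone.fc.in_features, n_identities)
```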
(This article belongs to the Topic Artificial Intelligence in Sensors)

19 pages, 5743 KiB  
Article
Automatic Dynamic Range Adjustment for Pedestrian Detection in Thermal (Infrared) Surveillance Videos
by Oluwakorede Monica Oluyide, Jules-Raymond Tapamo and Tom Mmbasu Walingo
Sensors 2022, 22(5), 1728; https://doi.org/10.3390/s22051728 - 23 Feb 2022
Cited by 7 | Viewed by 2588
Abstract
This paper presents a novel candidate generation algorithm for pedestrian detection in infrared surveillance videos. The proposed method uses a combination of histogram specification and iterative histogram partitioning to progressively adjust the dynamic range and efficiently suppress the background of each video frame. This pairing eliminates the general-purpose nature associated with histogram partitioning, where chosen thresholds, although reasonable, are usually not suitable for specific purposes. Moreover, as the initial threshold value chosen by histogram partitioning is sensitive to the shape of the histogram, specifying a uniformly distributed histogram before initial partitioning provides a stable histogram shape. This ensures that pedestrians are present in the image at the convergence point of the algorithm. The performance of the method was tested on images from four publicly available thermal databases. The results show the improvement of the proposed method over thresholding with minimum cross-entropy, its robustness across images acquired under different conditions, and results comparable with other methods in the literature. Full article
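The histogram-specification step can be sketched as remapping a frame so its gray levels follow a uniform reference distribution, giving the subsequent partitioning a stable histogram shape to start from; the frame below is random stand-in data:

```python
import numpy as np
from skimage import exposure

# Stand-in thermal frame; real input would be a video frame.
frame = np.random.randint(0, 256, (240, 320)).astype(np.uint8)

# Reference image whose gray levels are uniformly distributed.
reference = np.linspace(0, 255, frame.size).reshape(frame.shape).astype(np.uint8)

# Histogram specification: remap the frame onto the reference distribution.
matched = exposure.match_histograms(frame, reference)

# A simple initial threshold for the partitioning stage, e.g. the mean level.
print(matched.mean())
```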
(This article belongs to the Topic Artificial Intelligence in Sensors)

15 pages, 1647 KiB  
Article
SiamMixer: A Lightweight and Hardware-Friendly Visual Object-Tracking Network
by Li Cheng, Xuemin Zheng, Mingxin Zhao, Runjiang Dou, Shuangming Yu, Nanjian Wu and Liyuan Liu
Sensors 2022, 22(4), 1585; https://doi.org/10.3390/s22041585 - 18 Feb 2022
Cited by 6 | Viewed by 2961
Abstract
Siamese networks have been extensively studied in recent years. Most of the previous research focuses on improving accuracy, while merely a few recognize the necessity of reducing parameter redundancy and computation load. Even less work has been done to optimize the runtime memory cost when designing networks, making the Siamese-network-based tracker difficult to deploy on edge devices. In this paper, we present SiamMixer, a lightweight and hardware-friendly visual object-tracking network. It uses patch-by-patch inference to reduce memory use in shallow layers, where each small image region is processed individually. It merges and globally encodes feature maps in deep layers to enhance accuracy. Benefiting from these techniques, SiamMixer demonstrates a comparable accuracy to other large trackers with only 286 kB parameters and 196 kB extra memory use for feature maps. Additionally, we verify the impact of various activation functions and replace all activation functions with ReLU in SiamMixer. This reduces the cost when deploying on mobile devices. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)

20 pages, 3941 KiB  
Article
Reinforcement Learning for Compressed-Sensing Based Frequency Agile Radar in the Presence of Active Interference
by Shanshan Wang, Zheng Liu, Rong Xie and Lei Ran
Remote Sens. 2022, 14(4), 968; https://doi.org/10.3390/rs14040968 - 16 Feb 2022
Cited by 10 | Viewed by 2359
Abstract
Compressed sensing (CS)-based frequency agile radar (FAR) is attractive due to its superior data rate and target measurement performance. However, traditional frequency strategies for CS-based FAR are not cognitive enough to adapt well to the increasingly severe active interference environment. In this paper, we propose a cognitive frequency design method for CS-based FAR using reinforcement learning (RL). Specifically, we formulate the frequency design of CS-based FAR as a model-free partially observable Markov decision process (POMDP) to cope with the non-cooperation of the active interference environment. Then, a recognizer-based belief state computing method is proposed to relieve the storage and computation burdens in solving the model-free POMDP. This method is independent of the environmental knowledge and robust to the sensing scenario. Finally, the double deep Q network-based method using the exploration strategy integrating the CS-based recovery metric into the ϵ-greedy strategy (DDQN-CSR-ϵ-greedy) is proposed to solve the model-free POMDP. This can achieve better target measurement performance while avoiding active interference compared to the existing techniques. A number of examples are presented to demonstrate the effectiveness and advantage of the proposed design. Full article
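The double-DQN target at the heart of the DDQN-based solver decouples action selection (online network) from action evaluation (target network), which curbs Q-value overestimation. A minimal sketch, with q_net and target_net assumed to map states to per-action Q-values:

```python
import torch

def double_dqn_target(r, s_next, done, q_net, target_net, gamma=0.99):
    """Double-DQN bootstrap target for a batch of transitions."""
    with torch.no_grad():
        # Online network selects the greedy action...
        best_a = q_net(s_next).argmax(dim=1, keepdim=True)
        # ...and the target network evaluates it.
        q_eval = target_net(s_next).gather(1, best_a).squeeze(1)
        return r + gamma * (1.0 - done) * q_eval
```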
(This article belongs to the Topic Artificial Intelligence in Sensors)

16 pages, 1057 KiB  
Article
A Fingerprint-Based Verification Framework Using Harris and SURF Feature Detection Algorithms
by Samy Bakheet, Ayoub Al-Hamadi and Rehab Youssef
Appl. Sci. 2022, 12(4), 2028; https://doi.org/10.3390/app12042028 - 15 Feb 2022
Cited by 19 | Viewed by 6315
Abstract
Amongst all biometric-based personal authentication systems, a fingerprint that gives each person a unique identity is the most commonly used parameter for personal identification. In this paper, we present an automatic fingerprint-based authentication framework by means of fingerprint enhancement, feature extraction, and matching techniques. Initially, a variant of adaptive histogram equalization called CLAHE (contrast limited adaptive histogram equalization) along with a combination of FFT (fast Fourier transform), and Gabor filters are applied to enhance the contrast of fingerprint images. The fingerprint is then authenticated by picking a small amount of information from some local interest points called minutiae point features. These features are extracted from the thinned binary fingerprint image with a hybrid combination of Harris and SURF feature detectors to render significantly improved detection results. For fingerprint matching, the Euclidean distance between the corresponding Harris-SURF feature vectors of two feature points is used as a feature matching similarity measure of two fingerprint images. Moreover, an iterative algorithm called RANSAC (RANdom SAmple Consensus) is applied for fine matching and to automatically eliminate false matches and incorrect match points. Quantitative experimental results achieved on FVC2002 DB1 and FVC2000 DB1 public domain fingerprint databases demonstrate the good performance and feasibility of the proposed framework in terms of achieving average recognition rates of 95% and 92.5% for FVC2002 DB1 and FVC2000 DB1 databases, respectively. Full article
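A condensed sketch of this style of pipeline using OpenCV is shown below. ORB stands in for the paper's Harris–SURF combination, since SURF requires OpenCV's non-free contrib build, and the file names are hypothetical:

```python
import cv2
import numpy as np

img1 = cv2.imread('fp1.png', cv2.IMREAD_GRAYSCALE)   # hypothetical file names
img2 = cv2.imread('fp2.png', cv2.IMREAD_GRAYSCALE)

# Contrast-limited adaptive histogram equalization (CLAHE) enhancement.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
img1, img2 = clahe.apply(img1), clahe.apply(img2)

# Keypoint detection + description (ORB as a stand-in for Harris+SURF).
orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force matching of binary descriptors.
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

# RANSAC removes geometrically inconsistent (false) matches.
src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
if inlier_mask is not None:
    print('inlier matches:', int(inlier_mask.sum()))
```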
(This article belongs to the Topic Artificial Intelligence in Sensors)

18 pages, 4571 KiB  
Article
Keypoint-Aware Single-Stage 3D Object Detector for Autonomous Driving
by Wencai Xu, Jie Hu, Ruinan Chen, Yongpeng An, Zongquan Xiong and Han Liu
Sensors 2022, 22(4), 1451; https://doi.org/10.3390/s22041451 - 14 Feb 2022
Cited by 5 | Viewed by 2476
Abstract
Current single-stage 3D object detectors often use predefined single points of feature maps to generate confidence scores. However, such point features not only lack boundary and inner features but also fail to establish an explicit association between the regression box and the confidence score. In this paper, we present a novel single-stage object detector called the keypoint-aware single-stage 3D object detector (KASSD). First, we design a lightweight location attention module (LLM), comprising a feature reuse strategy (FRS) and a location attention module (LAM). The FRS facilitates the flow of spatial information. By considering location, the LAM adopts weighted feature fusion to obtain an efficient multi-level feature representation. To alleviate the inconsistencies mentioned above, we introduce a keypoint-aware module (KAM), which can model spatial relationships and learn rich semantic information by representing the predicted object as a set of keypoints. We conduct experiments on the KITTI dataset. The experimental results show that our method achieves competitive performance, with 79.74% AP at the moderate difficulty level while maintaining an inference speed of 21.8 FPS. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)

17 pages, 3096 KiB  
Article
A Limited-View CT Reconstruction Framework Based on Hybrid Domains and Spatial Correlation
by Ken Deng, Chang Sun, Wuxuan Gong, Yitong Liu and Hongwen Yang
Sensors 2022, 22(4), 1446; https://doi.org/10.3390/s22041446 - 13 Feb 2022
Cited by 1 | Viewed by 2910
Abstract
Limited-view computed tomography (CT) can efficaciously reduce the radiation dose in clinical diagnosis, and it is also adopted when inevitable mechanical and physical limitations are encountered in industrial inspection. Nevertheless, limited-view CT leads to severe artifacts in the resulting images, which is a major issue for low-dose protocols. Thus, how to exploit the limited prior information to obtain high-quality CT images becomes a crucial issue. We notice that almost all existing methods focus solely on a single CT image while neglecting the solid fact that the scanned objects are always highly spatially correlated; consequently, there is bountiful spatial information between the acquired consecutive CT images that is still largely left to be exploited. In this paper, we propose a novel hybrid-domain structure composed of fully convolutional networks that explores the three-dimensional neighborhood and works in a “coarse-to-fine” manner. We first conduct data completion in the Radon domain and transform the obtained full-view Radon data into images through filtered back projection (FBP). Subsequently, we employ the spatial correlation between continuous CT images to restore them and then refine the image texture to finally obtain high-quality CT images, achieving a PSNR of 40.209 and an SSIM of 0.943. Moreover, unlike other current limited-view CT reconstruction methods, we adopt FBP (implemented on GPUs) instead of SART-TV to significantly accelerate the overall procedure and realize it in an end-to-end manner. Full article
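The FBP stage, and the artifacts that a limited angular range causes, can be reproduced with scikit-image; the Radon-domain completion and spatial-correlation networks are not sketched here:

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

image = shepp_logan_phantom()

# Limited-view acquisition: only 120 of the full 180 degrees are measured.
limited_angles = np.arange(0.0, 120.0, 1.0)
sinogram = radon(image, theta=limited_angles)

# Filtered back projection; the missing angular range produces streak
# artifacts of the kind the paper's networks are trained to remove.
recon = iradon(sinogram, theta=limited_angles, filter_name='ramp')
print(recon.shape)
```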
(This article belongs to the Topic Artificial Intelligence in Sensors)

19 pages, 57155 KiB  
Article
Point Cloud Segmentation from iPhone-Based LiDAR Sensors Using the Tensor Feature
by Xuan Wang, Haiyang Lyu, Tianyi Mao, Weiji He and Qian Chen
Appl. Sci. 2022, 12(4), 1817; https://doi.org/10.3390/app12041817 - 10 Feb 2022
Cited by 6 | Viewed by 3679
Abstract
With LiDAR sensors now included in widely used consumer electronic devices, it is increasingly convenient to acquire point cloud data, but the data obtained from these consumer-grade LiDAR devices are also difficult to segment due to their low accuracy and high noise. To address this issue, a point cloud segmentation method using the tensor feature is proposed. The normal vectors of the point cloud are computed based on an initial tensor encoding and are further encoded into the tensor of each point. The tensor of each center point is then aggregated over all dimensions from the tensors of its neighboring points. Next, the tensor feature at each point is decomposed, shape features of different dimensions are detected, and the point cloud dataset is segmented based on the clustering of the tensor feature. Experiments were conducted on a point cloud dataset acquired with an iPhone-based LiDAR sensor; the results show that the normal vectors and tensors are correctly computed and that the dataset is successfully segmented. Full article
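A classical PCA-based stand-in for the tensor computation is sketched below: each point's neighborhood covariance (a 3 × 3 tensor) is eigen-decomposed, the smallest-eigenvalue axis approximates the normal, and the eigenvalue pattern distinguishes surface-, line-, and point-like structure. The cloud and neighborhood size are stand-ins, and this is not the paper's exact encoding/voting scheme:

```python
import numpy as np

def estimate_normals(points, k=16):
    """Normals from eigen-decomposition of each point's neighborhood tensor."""
    normals = np.empty_like(points)
    for i, p in enumerate(points):
        # k nearest neighbors by Euclidean distance (brute force for clarity).
        idx = np.argsort(np.linalg.norm(points - p, axis=1))[:k]
        nbrs = points[idx] - points[idx].mean(axis=0)
        tensor = nbrs.T @ nbrs / k                 # 3x3 covariance tensor
        eigvals, eigvecs = np.linalg.eigh(tensor)  # ascending eigenvalues
        normals[i] = eigvecs[:, 0]                 # smallest-eigenvalue axis
    return normals

cloud = np.random.rand(200, 3)                     # stand-in point cloud
print(estimate_normals(cloud)[:3])
```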
(This article belongs to the Topic Artificial Intelligence in Sensors)

16 pages, 3018 KiB  
Article
Mechanism Analysis and Self-Adaptive RBFNN Based Hybrid Soft Sensor Model in Energy Production Process: A Case Study
by Junrong Du, Jian Zhang, Laishun Yang, Xuzhi Li, Lili Guo and Lei Song
Sensors 2022, 22(4), 1333; https://doi.org/10.3390/s22041333 - 10 Feb 2022
Cited by 5 | Viewed by 2359
Abstract
Although hard sensors can easily be used for various kinds of condition monitoring in energy production processes, soft sensors are confined to specific scenarios in which installation is difficult and working conditions are complex. Moreover, industrial processes may involve complex control and operation, the extraction of relevant information from abundant sensor data may be challenging, and describing complicated process data patterns is also becoming a hot topic in soft-sensor development. In this paper, a hybrid soft sensor model based on mechanism analysis and data-driven modeling is proposed, with the ventilation sensing of a coal mill in a power plant conducted as a case study. First, a mechanism model of ventilation is established via the laws of mass and energy conservation, and object-relevant features are identified as the inputs of the data-driven method. Second, a radial basis function neural network (RBFNN) is used for soft sensor modeling, and a genetic algorithm (GA) is adopted for the quick and accurate determination of the RBFNN hyper-parameters; the resulting self-adaptive RBFNN (SA-RBFNN) improves the soft sensor performance in the energy production process. Finally, the effectiveness of the proposed method is verified on a real-world power plant dataset, taking coal mill ventilation soft sensing as the case study. Full article
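A minimal RBFNN of the kind described can be assembled from k-means centers, a Gaussian feature expansion, and a linear readout; the paper tunes the hyper-parameters with a genetic algorithm, whereas fixed values and random stand-in data are used here:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

def fit_rbfnn(X, y, n_centers=10, gamma=1.0):
    """RBF network: k-means centers, Gaussian features, linear output layer."""
    centers = KMeans(n_clusters=n_centers, n_init=10).fit(X).cluster_centers_

    def phi(X):
        # Gaussian RBF activation for every (sample, center) pair.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    head = Ridge(alpha=1e-3).fit(phi(X), y)
    return lambda Xq: head.predict(phi(Xq))

X = np.random.rand(100, 4)          # stand-in object-relevant features
y = X.sum(axis=1)                   # stand-in ventilation target
model = fit_rbfnn(X, y)
print(model(X[:3]))
```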
(This article belongs to the Topic Artificial Intelligence in Sensors)

16 pages, 3603 KiB  
Article
Optimal Architecture of Floating-Point Arithmetic for Neural Network Training Processors
by Muhammad Junaid, Saad Arslan, TaeGeon Lee and HyungWon Kim
Sensors 2022, 22(3), 1230; https://doi.org/10.3390/s22031230 - 6 Feb 2022
Cited by 7 | Viewed by 5319
Abstract
The convergence of artificial intelligence (AI) is one of the critical technologies in the recent fourth industrial revolution. The AIoT (Artificial Intelligence Internet of Things) is expected to be a solution that aids rapid and secure data processing. While the success of the AIoT demands low-power neural network processors, most recent research has focused on accelerator designs only for inference. The growing interest in self-supervised and semi-supervised learning now calls for processors that offload the training process in addition to the inference process. Incorporating training with high accuracy goals requires floating-point operators, but higher-precision floating-point arithmetic architectures in neural networks tend to consume a large area and much energy; consequently, an energy-efficient, compact accelerator is required. The proposed architecture incorporates training in 32-bit, 24-bit, 16-bit, and mixed precisions to find the optimal floating-point format for low-power, small-sized edge devices. The proposed accelerator engines have been verified on an FPGA for both inference and training on the MNIST image dataset. The combination of a 24-bit custom FP format with 16-bit Brain FP achieved an accuracy of more than 93%. ASIC implementation of this optimized mixed-precision accelerator using TSMC 65 nm technology reveals an active area of 1.036 × 1.036 mm² and an energy consumption of 4.445 µJ per training of one image. Compared with the 32-bit architecture, the size and the energy are reduced by 4.7 and 3.91 times, respectively. Therefore, a CNN structure using floating-point numbers with an optimized data path will contribute significantly to the development of the AIoT field, which requires small area, low energy, and high accuracy. Full article
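The precision trade-off can be mimicked in software by truncating mantissa bits of IEEE float32 values; keeping 7 mantissa bits approximates 16-bit Brain FP, and keeping 15 approximates a 24-bit format. This is an illustrative emulation only, not the paper's hardware data path:

```python
import numpy as np

def truncate_float32(x, mantissa_bits):
    """Zero out low-order mantissa bits of float32 values to emulate a
    reduced-precision format with the same 8-bit exponent."""
    keep = 32 - (23 - mantissa_bits)           # total bits kept from the MSB
    mask = np.uint32(0xFFFFFFFF) << np.uint32(32 - keep)
    bits = x.astype(np.float32).view(np.uint32) & mask
    return bits.view(np.float32)

x = np.array([0.1, 1.2345678, 100.0], dtype=np.float32)
print(truncate_float32(x, 7))    # Brain-FP-like (16 bits total)
print(truncate_float32(x, 15))   # 24-bit-like custom format
```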
(This article belongs to the Topic Artificial Intelligence in Sensors)

26 pages, 3234 KiB  
Review
Analytical Review of Event-Based Camera Depth Estimation Methods and Systems
by Justas Furmonas, John Liobe and Vaidotas Barzdenas
Sensors 2022, 22(3), 1201; https://doi.org/10.3390/s22031201 - 5 Feb 2022
Cited by 18 | Viewed by 7717
Abstract
Event-based cameras have increasingly become more commonplace in the commercial space as the performance of these cameras has also continued to increase to the degree where they can exponentially outperform their frame-based counterparts in many applications. However, instantiations of event-based cameras for depth estimation are sparse. After a short introduction detailing the salient differences and features of an event-based camera compared to that of a traditional, frame-based one, this work summarizes the published event-based methods and systems known to date. An analytical review of these methods and systems is performed, justifying the conclusions drawn. This work is concluded with insights and recommendations for further development in the field of event-based camera depth estimation. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)

26 pages, 5912 KiB  
Article
State of Charge Estimation of Battery Based on Neural Networks and Adaptive Strategies with Correntropy
by Rômulo Navega Vieira, Juan Moises Mauricio Villanueva, Thommas Kevin Sales Flores and Euler Cássio Tavares de Macêdo
Sensors 2022, 22(3), 1179; https://doi.org/10.3390/s22031179 - 4 Feb 2022
Cited by 12 | Viewed by 2864
Abstract
Nowadays, electric vehicles have gained great popularity due to their performance and efficiency. Investment in the development of this new technology is justified by increased awareness of the environmental impacts caused by combustion vehicles, such as greenhouse gas emissions, which have contributed to global warming, as well as by the depletion of non-renewable energy sources such as oil. The lithium-ion battery is an appropriate choice for electric vehicles (EVs) due to its promising features of high voltage, high energy density, low self-discharge, and long life cycle. In this context, the State of Charge (SoC) is one of the vital parameters of the battery management system (BMS). Nevertheless, because the discharge and charging of battery cells involve complicated chemical processes, it is hard to determine the state of charge of a battery cell. This paper analyses the application of Artificial Neural Networks (ANNs) to the estimation of the SoC of lithium batteries, using a dataset made available in an online repository maintained by NASA's research center for training and validation. Normally, the learning of these networks is performed by some gradient-based method with the mean squared error as the cost function. This paper evaluates the substitution of this traditional function by an information-theoretic similarity measure called the Maximum Correntropy Criterion (MCC). This similarity measure allows higher-order statistical moments to be considered during the training process; it is therefore more appropriate for non-Gaussian error distributions and makes training less sensitive to the presence of outliers. However, this can only be achieved by properly adjusting the width of the Gaussian kernel of the correntropy, and the proper tuning of this parameter is done using adaptive strategies and genetic algorithms. The obtained results demonstrate that the use of correntropy as the cost function in the error backpropagation algorithm makes the identification procedure using ANNs more robust than with the traditional mean squared error. Full article
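A sketch of a correntropy-based loss: a Gaussian-kernel similarity between prediction and target, under which large outlier errors saturate instead of dominating the gradient. The kernel width sigma, which the paper tunes adaptively and with genetic algorithms, is fixed here:

```python
import torch

def mcc_loss(pred, target, sigma=1.0):
    """Maximum-correntropy-style loss: 1 - Gaussian kernel of the error.
    Unlike MSE, the per-sample loss saturates at 1 for large errors."""
    err = pred - target
    return torch.mean(1.0 - torch.exp(-err ** 2 / (2.0 * sigma ** 2)))

pred, target = torch.randn(64), torch.randn(64)
print(mcc_loss(pred, target))
```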
(This article belongs to the Topic Artificial Intelligence in Sensors)

13 pages, 1194 KiB  
Article
A Machine-Learning Model for Lung Age Forecasting by Analyzing Exhalations
by Marc Pifarré, Alberto Tena, Francisco Clarià, Francesc Solsona, Jordi Vilaplana, Arnau Benavides, Lluis Mas and Francesc Abella
Sensors 2022, 22(3), 1106; https://doi.org/10.3390/s22031106 - 1 Feb 2022
Cited by 2 | Viewed by 3396
Abstract
Spirometers are important devices for following up patients with respiratory diseases, but they are mainly located only at hospitals, with all the disadvantages this entails. This limits their use and, consequently, the supervision of patients. Research efforts have focused on providing digital alternatives to spirometers; although less accurate, the authors claim they are cheaper and usable by many more people worldwide at any given time and place. To popularize the use of spirometers further, we are also interested in providing user-friendly lung-capacity metrics instead of the traditional spirometry ones. The main objective, which is also the main contribution of this research, is to obtain a person's lung age by analyzing the properties of their exhalation with a machine-learning method. For this study, 188 samples of blowing sounds were used, taken from 91 males (48.4%) and 97 females (51.6%) aged between 17 and 67. A total of 42 spirometer and frequency-like features, including gender, were used. Traditional machine-learning algorithms used in voice recognition were applied to the most significant features. We found that the best classification algorithm was the Quadratic Linear Discriminant algorithm when no distinction was made between genders. By splitting the corpus into age groups of 5 consecutive years, an accuracy, sensitivity, and specificity of 94.69%, 94.45%, and 99.45%, respectively, were obtained. Features in the audio of users' exhalations that allow them to be classified into their corresponding 5-year lung age group were successfully detected. Our methodology can become a reliable tool for use with mobile devices to detect lung abnormalities or diseases. Full article
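Assuming the named classifier refers to quadratic discriminant analysis, the classification stage could be sketched as below; the features and labels are random stand-ins, not the study's 188-sample corpus:

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Random stand-ins: 188 exhalation samples, 10 selected features, and a
# 5-year lung-age group label (10 groups). Real inputs would be the
# spirometer- and frequency-like features described above.
rng = np.random.default_rng(0)
X = rng.random((188, 10))
y = rng.integers(0, 10, 188)

qda = QuadraticDiscriminantAnalysis(reg_param=0.1)  # mild regularization
print(cross_val_score(qda, X, y, cv=5).mean())      # chance level on noise
```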
(This article belongs to the Topic Artificial Intelligence in Sensors)

19 pages, 4108 KiB  
Article
Geo-Electrical Detection of Impermeable Membranes in the Subsurface
by Marios Karaoulis, Pauline P. Kruiver, Victor Hopman and Bob Beuving
Appl. Sci. 2022, 12(3), 1555; https://doi.org/10.3390/app12031555 - 31 Jan 2022
Viewed by 2144
Abstract
In this paper, we investigate the look-ahead capabilities of electrical resistivity tomography (ERT) for a soil-penetrating tool fitted with electrodes. In our case study, the desired detection resolution (10 to 20 cm at a depth of at least 6 m) was much higher than what can be achieved with classical surface ERT measurements. Therefore, we designed a logging-type tool that can be pushed into the ground. Our target was a buried PVC membrane, which acts as an electrical insulator. In this phase, we performed numerical simulations and laboratory measurements. The methodology is based on a two-step approach. First, the background resistivity along the tool's path is determined by inversion of near-looking electrode configurations. Next, the theoretical response (kernel) of the far-looking configurations is calculated for different membrane positions. The root mean square (RMS) error between the kernel and the measurements is minimized to detect the membrane. If the membrane is within sensing reach, the RMS has a minimum for the kernel corresponding to the true membrane position. If no minimum in the RMS is found, the membrane is not within sensing reach and the tool can be pushed closer to it. The laboratory tests comprised measurements in a tank filled with water, saturated sand, or saturated sand with clay slabs and chunks. The laboratory results successfully pinpointed the position of the membrane with an accuracy of 10 to 20 cm, depending on the dimensions of the tool and the distance from the membrane. Full article
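The position search can be sketched as an RMS scan over a bank of precomputed kernels, one per candidate membrane position; the kernels and measurement below are synthetic stand-ins, not simulated ERT responses:

```python
import numpy as np

positions = np.linspace(0.5, 6.0, 56)                      # candidate depths [m]
kernels = np.random.rand(positions.size, 32)               # theoretical responses

# Synthetic "measurement": the response at one true position plus noise.
true_idx = 23
measured = kernels[true_idx] + 0.01 * np.random.randn(32)

# RMS misfit between each kernel and the measurement; the minimum flags
# the most likely membrane position (if the membrane is within reach).
rms = np.sqrt(((kernels - measured) ** 2).mean(axis=1))
best = positions[np.argmin(rms)]
print(f"estimated membrane position: {best:.2f} m")
```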
(This article belongs to the Topic Artificial Intelligence in Sensors)

15 pages, 18359 KiB  
Article
Deep Modular Bilinear Attention Network for Visual Question Answering
by Feng Yan, Wushouer Silamu and Yanbing Li
Sensors 2022, 22(3), 1045; https://doi.org/10.3390/s22031045 - 28 Jan 2022
Cited by 7 | Viewed by 3590
Abstract
VQA (Visual Question Answering) is a multimodal task: given an image and a question related to it, the model must determine the correct answer. The attention mechanism has become a de facto component of almost all VQA models, and most recent VQA approaches use dot products to calculate the intra-modality and inter-modality attention between visual and language features. In this paper, the BAN (Bilinear Attention Network) method is used to calculate attention. We propose a deep multimodality bilinear attention network (DMBA-NET) framework with two basic attention units (BAN-GA and BAN-SA) to construct inter-modality and intra-modality relations. The two basic attention units are the core of the whole network framework and can be cascaded in depth. In addition, we encode the question based on the dynamic word vectors of BERT (Bidirectional Encoder Representations from Transformers) and then process the question features further with self-attention. We then sum them with the features obtained by BAN-GA and BAN-SA before the final classification. Without using the Visual Genome datasets for augmentation, the accuracy of our model reaches 70.85% on the test-std set of VQA 2.0. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)

18 pages, 3318 KiB  
Article
Human Activity and Motion Pattern Recognition within Indoor Environment Using Convolutional Neural Networks Clustering and Naive Bayes Classification Algorithms
by Ashraf Ali, Weam Samara, Doaa Alhaddad, Andrew Ware and Omar A. Saraereh
Sensors 2022, 22(3), 1016; https://doi.org/10.3390/s22031016 - 28 Jan 2022
Cited by 17 | Viewed by 4703
Abstract
Human Activity Recognition (HAR) systems are designed to read sensor data and analyse it to classify any detected movement and respond accordingly. However, there is a need for more responsive, near real-time systems that can distinguish between false and true alarms. To accurately determine alarm triggers, the motion patterns of legitimate users need to be stored over a certain period and used to train the system to recognise features associated with their movements. This training process is followed by a testing cycle that uses actual data with patterns of activity that are either similar or dissimilar to the training data set. This paper evaluates the use of a combined Convolutional Neural Network (CNN) and Naive Bayes classifier for accuracy and robustness in correctly identifying true alarm triggers, such as a buzzer sound. It shows that pattern recognition can be achieved using either of the two approaches, even when a partial motion pattern is derived as a subset of a full-motion path. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)

15 pages, 3772 KiB  
Article
Computer Vision-Based Classification of Flow Regime and Vapor Quality in Vertical Two-Phase Flow
by Shai Kadish, David Schmid, Jarryd Son and Edward Boje
Sensors 2022, 22(3), 996; https://doi.org/10.3390/s22030996 - 27 Jan 2022
Cited by 9 | Viewed by 3500
Abstract
This paper presents a method to classify flow regime and vapor quality in vertical two-phase (vapor-liquid) flow using a video of the flow as the input. It represents the first high-performing, entirely camera-image-based method for classifying vertical flow regimes that is effective across a wide range of regimes, and the first image-based tool for estimating vapor quality. The approach uses computer vision techniques and deep learning to train a convolutional neural network (CNN), used for individual frame classification and image feature extraction, and a deep long short-term memory (LSTM) network, used to capture the temporal information present in a sequence of image feature sets and to make the final vapor quality or flow regime classification. This novel architecture for two-phase flow studies achieves accurate flow regime and vapor quality classifications in a practical application to two-phase CO2 flow in vertical tubes, based on offline data and an online prototype implementation developed as a proof of concept for using these models within a feedback control loop. The use of automatically selected image features, produced by a CNN architecture in three distinct tasks (flow-image classification, flow-regime classification, and vapor quality prediction), confirms that these features are robust and useful and offer a viable alternative to manually extracted image features for image-based flow studies. The successful application of the LSTM network demonstrates the significance of temporal information for image-based studies of two-phase flow. Full article
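The CNN-features-into-LSTM pattern the authors describe can be sketched as follows, assuming PyTorch; the layer sizes, clip shape, and class count are placeholders rather than the architecture from the paper.

```python
import torch
import torch.nn as nn

class CnnLstmClassifier(nn.Module):
    """Per-frame CNN features fed to an LSTM for sequence-level classification."""
    def __init__(self, n_classes, feat_dim=64, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, frames):
        # frames: (batch, time, 1, H, W) grayscale video clip
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)  # (B, T, F)
        _, (h, _) = self.lstm(feats)
        return self.head(h[-1])  # logits per flow-regime class

model = CnnLstmClassifier(n_classes=5)
clip = torch.randn(2, 16, 1, 64, 64)  # two 16-frame clips
print(model(clip).shape)  # torch.Size([2, 5])
```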

24 pages, 9567 KiB  
Article
Study on Data-Driven Approaches for the Automated Assembly of Board-to-Board Connectors
by Hsien-I Lin, Fauzy Satrio Wibowo and Ashutosh Kumar Singh
Appl. Sci. 2022, 12(3), 1216; https://doi.org/10.3390/app12031216 - 24 Jan 2022
Cited by 1 | Viewed by 3384
Abstract
Mating board-to-board (BtB) connectors is difficult because of their design complexity, small pitch (0.4 mm), and susceptibility to damage; for these reasons, the assembly task is currently performed manually. There is also a high chance of damaging the connectors during the mating process, so automating the assembly process is essential to ensure its safety and reliability. Commonly, the mating procedure adopts a model-based approach, including error-recovery methods, which suits connectors of lower design complexity with fewer pins and a larger pitch. Instead, we propose a data-driven prediction approach for the mating process of the fine-pitch (0.4 mm) board-to-board connector using a manipulator arm and a force sensor. The data-driven approach encodes the force data as time series using methods such as the recurrence plot (RP), Gramian matrix, k-nearest neighbor dynamic time warping (kNN-DTW), Markov transition field (MTF), and long short-term memory (LSTM), and compares the prediction performance of each model. The proposed method combines the RP encoding with a convolutional neural network (RP-CNN) to make predictions from the force data. In the experiment, the proposed RP-CNN model used two alternative final layers, SoftMax and L2-SVM, and was compared with the other prediction models mentioned above. The output of the data-driven prediction is the coordinate alignment of the female board-to-board connector with the male connector based on the measured force. In the experiments, the encoding approach, especially RP-CNN (L2-SVM), outperformed all the other prediction models with an accuracy of 86%. Full article
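A recurrence plot, one of the encodings compared above, turns a 1-D signal into a 2-D image that a CNN can consume. A minimal version is sketched below; the threshold value and the synthetic force trace are assumptions for illustration.

```python
import numpy as np

def recurrence_plot(x, eps=0.1):
    """Binary recurrence plot: R[i, j] = 1 if |x_i - x_j| < eps."""
    d = np.abs(x[:, None] - x[None, :])  # pairwise distance matrix
    return (d < eps).astype(np.uint8)

# synthetic force trace for a mating attempt (illustrative only)
t = np.linspace(0, 1, 200)
force = np.sin(8 * np.pi * t) * np.exp(-2 * t) + 0.02 * np.random.randn(200)

rp = recurrence_plot(force, eps=0.05)
print(rp.shape)  # (200, 200) image that a CNN can classify
```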

18 pages, 3074 KiB  
Article
Spatial-Temporal Convolutional Transformer Network for Multivariate Time Series Forecasting
by Lei Huang, Feng Mao, Kai Zhang and Zhiheng Li
Sensors 2022, 22(3), 841; https://doi.org/10.3390/s22030841 - 22 Jan 2022
Cited by 23 | Viewed by 9223
Abstract
Multivariate time series forecasting has long been a research hotspot because of its wide range of application scenarios. However, the dynamics and multiple patterns of spatiotemporal dependencies make this problem challenging. Most existing methods suffer from two major shortcomings: (1) they ignore the local context semantics when modeling temporal dependencies; (2) they lack the ability to capture spatial dependencies of multiple patterns. To tackle these issues, we propose a novel Transformer-based model for multivariate time series forecasting, called the spatial–temporal convolutional Transformer network (STCTN). STCTN mainly consists of two novel attention mechanisms that respectively model temporal and spatial dependencies. A local-range convolutional attention mechanism is proposed to simultaneously focus on both global and local context temporal dependencies at the sequence level, which addresses the first shortcoming. A group-range convolutional attention mechanism is designed to model multiple spatial dependency patterns at the graph level and also reduce computation and memory complexity, which addresses the second shortcoming. Continuous positional encoding is further proposed to link historical observations and predicted future values in the positional encoding, which also improves the forecasting performance. Extensive experiments on six real-world datasets show that the proposed STCTN outperforms state-of-the-art methods and is more robust to nonsmooth time series data. Full article

15 pages, 2632 KiB  
Article
Semantic Segmentation Based on Depth Background Blur
by Hao Li, Changjiang Liu and Anup Basu
Appl. Sci. 2022, 12(3), 1051; https://doi.org/10.3390/app12031051 - 20 Jan 2022
Cited by 3 | Viewed by 3251
Abstract
Deep convolutional neural networks (CNNs) are effective in image classification and are widely used in image segmentation tasks. Several neural networks have achieved high segmentation accuracy on existing semantic datasets, for instance PASCAL VOC, CamVid, and Cityscapes. However, there are almost no studies of semantic segmentation from the perspective of the dataset itself. In this paper, we analyzed the characteristics of datasets and propose a novel experimental strategy based on bokeh to weaken the impact of futile background information. The bokeh module processes each image in the inference phase by selecting a suitable fuzzy factor σ, so that the attention of the network can focus on the categories of interest. Several networks based on fully convolutional networks (FCNs) were used to verify the effectiveness of our method. Extensive experiments demonstrate that our approach can generally improve segmentation results on existing datasets, such as PASCAL VOC 2012 and CamVid. Full article
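The bokeh idea, keeping the foreground sharp while blurring futile background with a fuzzy factor σ, can be sketched with OpenCV as follows; the mask source and the σ value are illustrative assumptions, not the paper's selection procedure.

```python
import cv2
import numpy as np

def bokeh_background(image, mask, sigma=5.0):
    """Blur everything outside the foreground mask with a Gaussian of width sigma."""
    blurred = cv2.GaussianBlur(image, (0, 0), sigmaX=sigma)
    mask3 = (mask > 0)[..., None]            # (H, W, 1) boolean foreground mask
    return np.where(mask3, image, blurred)   # keep foreground sharp, blur background

# toy example: a sharp square region on a noisy background
img = (np.random.rand(128, 128, 3) * 255).astype(np.uint8)
fg = np.zeros((128, 128), np.uint8)
fg[32:96, 32:96] = 1
out = bokeh_background(img, fg, sigma=7.0)
```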

11 pages, 15140 KiB  
Communication
Neural Network-Enabled Flexible Pressure and Temperature Sensor with Honeycomb-like Architecture for Voice Recognition
by Yue Su, Kainan Ma, Xu Zhang and Ming Liu
Sensors 2022, 22(3), 759; https://doi.org/10.3390/s22030759 - 19 Jan 2022
Cited by 15 | Viewed by 3590
Abstract
Flexible pressure sensors have been studied as wearable voice-recognition devices to be utilized in human-machine interaction. However, the development of highly sensitive, skin-attachable, and comfortable sensing devices to achieve clear voice detection remains a considerable challenge. Herein, we present a wearable and flexible pressure and temperature sensor with a sensitive response to vibration, which can accurately recognize the human voice by combining it with an artificial neural network. The device consists of a polyethylene terephthalate (PET) film printed with a silver electrode, a filament-microstructured polydimethylsiloxane (PDMS) film embedded with single-walled carbon nanotubes, and a polyimide (PI) film sputtered with a patterned Ti/Pt thermistor strip. The developed pressure sensor exhibits a pressure sensitivity of 0.398 kPa⁻¹ in the low-pressure regime, and the fabricated temperature sensor shows a desirable temperature coefficient of resistance of 0.13% °C⁻¹ in the range of 25 °C to 105 °C. By training and testing the neural network model with waveform data of the sensor obtained from human pronunciation, the vocal fold vibrations of different words can be successfully recognized, with a total recognition accuracy of 93.4%. Our results suggest that the fabricated sensor has substantial potential for application in human-computer interface fields, such as voice control, vocal healthcare monitoring, and voice authentication. Full article

15 pages, 2139 KiB  
Article
MEMe: A Mutually Enhanced Modeling Method for Efficient and Effective Human Pose Estimation
by Jie Li, Zhixing Wang, Bo Qi, Jianlin Zhang and Hu Yang
Sensors 2022, 22(2), 632; https://doi.org/10.3390/s22020632 - 14 Jan 2022
Cited by 7 | Viewed by 3301
Abstract
In this paper, a mutually enhanced modeling method (MEMe) is presented for human pose estimation, which focuses on enhancing lightweight model performance while keeping complexity low. To obtain higher accuracy, traditional models are greatly scaled up, which makes them difficult to deploy; more lightweight models, however, show a large performance gap compared to the former, so a way to fill this gap is urgently needed. We therefore propose MEMe to reconstruct a lightweight baseline model, EffBase (transferred intuitively from EfficientDet), into the efficient and effective pose (EEffPose) net, which contains three mutually enhanced modules: the Enhanced EffNet (EEffNet) backbone, the total fusion neck (TFNeck), and the final attention head (FAHead). Extensive experiments on the COCO and MPII benchmarks show that our MEMe-based models reach state-of-the-art performance with limited parameters. Specifically, under the same conditions, our EEffPose-P0 with a 256 × 192 input uses only 8.98 M parameters to achieve 75.4 AP on the COCO val set, outperforming HRNet-W48 with only 14% of its parameters. Full article

14 pages, 3391 KiB  
Article
Dynamic Noise Reduction with Deep Residual Shrinkage Networks for Online Fault Classification
by Alireza Salimy, Imene Mitiche, Philip Boreham, Alan Nesbitt and Gordon Morison
Sensors 2022, 22(2), 515; https://doi.org/10.3390/s22020515 - 10 Jan 2022
Cited by 5 | Viewed by 2899
Abstract
Fault signals in high-voltage (HV) power plant assets are captured using the electromagnetic interference (EMI) technique. The extracted EMI signals are taken under different conditions, which introduces varying noise levels into the signals. The aim of this work is to address these varying noise levels using a deep residual shrinkage network (DRSN), which implements shrinkage methods with learned thresholds to de-noise the signals for classification, together with a time-frequency signal decomposition method for feature engineering of the raw time-series signals. Several alternative DRSN architectures are trained and validated with expertly labeled EMI fault signals and then tested on previously unseen data; the signals are first de-noised, and controlled amounts of noise are then added at various levels. The DRSN architectures are assessed based on their testing accuracy at the various controlled noise levels. Results show that DRSN architectures using the newly proposed residual-shrinkage-building-unit-2 (RSBU-2) outperform the residual-shrinkage-building-unit-1 (RSBU-1) architectures at low signal-to-noise ratios. The findings show that thresholding methods perform well in noisy environments and work well with real-world EMI fault signals, proving them sufficient for real-world EMI fault classification and condition monitoring. Full article

27 pages, 12458 KiB  
Article
Self-Protected Virtual Sensor Network for Microcontroller Fault Detection
by German Sternharz, Jonas Skackauskas, Ayman Elhalwagy, Anthony J. Grichnik, Tatiana Kalganova and Md Nazmul Huda
Sensors 2022, 22(2), 454; https://doi.org/10.3390/s22020454 - 7 Jan 2022
Cited by 3 | Viewed by 2973
Abstract
This paper introduces a procedure to compare the functional behaviour of individual units of electronic hardware of the same type. The primary use case for this method is to estimate the functional integrity of an unknown device unit based on the behaviour of a known and proven reference unit. The method is based on the so-called virtual sensor network (VSN) approach, where the output quantity of a physical sensor measurement is replicated by a virtual model output. In the present study, this approach is extended to model the functional behaviour of electronic hardware by a neural network (NN) with Long Short-Term Memory (LSTM) layers to encapsulate the potential time-dependence of the signals. The proposed method is illustrated and validated on measurements from a remote-controlled drone, which is operated with two variants of controller hardware: a reference controller unit and a malfunctioning counterpart. It is demonstrated that the presented approach successfully identifies and describes the unexpected behaviour of the test device. In the presented case study, the model outputs a signal sample prediction in 0.14 ms and achieves a reconstruction accuracy of the validation data with a root mean square error (RMSE) below 0.04 relative to the data range. In addition, three self-protection features (multidimensional boundary-check, Mahalanobis distance, auxiliary autoencoder NN) are introduced to gauge the certainty of the VSN model output. Full article
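Of the three self-protection features, the Mahalanobis-distance check is easy to illustrate: a VSN output is trusted only if it lies close to the reference unit's signal distribution. The sketch below is a generic version with an assumed threshold and synthetic reference data, not the authors' implementation.

```python
import numpy as np

class MahalanobisGate:
    """Flags model outputs that fall outside the reference distribution."""
    def __init__(self, reference, threshold=3.0):
        self.mu = reference.mean(axis=0)
        self.cov_inv = np.linalg.inv(np.cov(reference, rowvar=False))
        self.threshold = threshold

    def distance(self, x):
        d = x - self.mu
        return float(np.sqrt(d @ self.cov_inv @ d))

    def is_trustworthy(self, x):
        return self.distance(x) < self.threshold

ref = np.random.randn(1000, 4)                 # stand-in for reference-unit signals
gate = MahalanobisGate(ref)
print(gate.is_trustworthy(np.zeros(4)))        # True: close to the reference mean
print(gate.is_trustworthy(10 * np.ones(4)))    # False: far outside the distribution
```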

14 pages, 3551 KiB  
Article
Detecting Extratropical Cyclones of the Northern Hemisphere with Single Shot Detector
by Minjing Shi, Pengfei He and Yuli Shi
Remote Sens. 2022, 14(2), 254; https://doi.org/10.3390/rs14020254 - 6 Jan 2022
Cited by 5 | Viewed by 2301
Abstract
In this paper, we propose a deep learning-based model to detect extratropical cyclones (ETCs) in the northern hemisphere, and we develop a novel workflow for processing images and generating labels for ETCs. We first labeled the cyclone centers by adapting an approach from Bonfanti et al. (2017) and set up criteria for labeling ETCs at three stages: developing, mature, and declining. We then built a framework for labeling and preprocessing the images in our dataset. Once the images and labels were ready to serve as inputs, an object detection model was built with the Single Shot Detector (SSD) and adjusted to fit the format of the dataset. We trained and evaluated our model on our labeled dataset in two settings (binary and multiclass classification) and recorded the results. We found that the model achieves relatively high performance when detecting ETCs at the mature stage (mean Average Precision of 86.64%) and an acceptable result when detecting ETCs of all three categories (mean Average Precision of 79.34%). The single-shot detector model can succeed in detecting ETCs at different stages and has demonstrated great potential for future applications of ETC detection in other relevant settings. Full article

13 pages, 1919 KiB  
Article
Human Motion Tracking Using 3D Image Features with a Long Short-Term Memory Mechanism Model—An Example of Forward Reaching
by Kai-Yu Chen, Li-Wei Chou, Hui-Min Lee, Shuenn-Tsong Young, Cheng-Hung Lin, Yi-Shu Zhou, Shih-Tsang Tang and Ying-Hui Lai
Sensors 2022, 22(1), 292; https://doi.org/10.3390/s22010292 - 31 Dec 2021
Cited by 7 | Viewed by 3941
Abstract
Human motion tracking is widely applied to rehabilitation tasks, and inertial measurement unit (IMU) sensors are a well-known approach for recording motion behavior. IMU sensors can provide accurate information regarding three-dimensional (3D) human motion. However, IMU sensors must be attached to the body, which can be inconvenient or uncomfortable for users. To alleviate this issue, visual-based tracking systems using two-dimensional (2D) RGB images have been studied extensively in recent years and have shown suitable performance for human motion tracking. However, 2D image systems have their limitations: human motion consists of spatial changes, and 3D motion features predicted from 2D images are therefore limited. In this study, we propose a deep learning (DL) human motion tracking technology using 3D image features with a deep bidirectional long short-term memory (DBLSTM) mechanism model. The experimental results show that, compared with the traditional 2D image system, the proposed system provides improved human motion tracking ability, with an RMSE in acceleration of less than 0.5 m/s² in the X, Y, and Z directions. These findings suggest that the proposed model is a viable approach for future human motion tracking applications. Full article

14 pages, 7758 KiB  
Article
A Two-Stage Approach to Important Area Detection in Gathering Place Using a Novel Multi-Input Attention Network
by Jianqiang Xu, Haoyu Zhao and Weidong Min
Sensors 2022, 22(1), 285; https://doi.org/10.3390/s22010285 - 31 Dec 2021
Viewed by 1832
Abstract
An important area in a gathering place is a region that attracts people's constant attention and has evident visual features, such as a flexible stage or an open-air show. Finding such areas can help security supervisors locate abnormal regions automatically. Existing related methods lack an efficient means of finding important-area candidates in a scene and cannot judge whether a candidate attracts people's attention. To detect important areas, this study proposes a two-stage method with a novel multi-input attention network (MAN). The first stage, important area candidate generation, generates candidate important areas with an image-processing pipeline (K-means++, image dilation, median filtering, and the RLSA algorithm); the candidate areas are selected automatically for further analysis. The second stage, important area candidate classification, detects important areas among the candidates with the MAN. In particular, the MAN is designed as a multi-input network structure that fuses global and local image features to judge whether an area attracts people's attention. To enhance the representation of candidate areas, two modules (a channel attention module and a spatial attention module) are proposed on the basis of the attention mechanism; they are mainly based on multi-layer perceptrons and pooling operations to reconstruct the image features and provide a considerably more efficient representation. This study also contributes a new dataset, called Gathering Place Important Area Detection, for testing the proposed two-stage method. Lastly, experimental results show that the proposed method performs well and can correctly detect important areas. Full article
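Channel and spatial attention modules built from pooling and small MLPs, as described above, commonly follow the squeeze-and-excitation/CBAM pattern sketched below in PyTorch. This is a generic rendition under that assumption, not the MAN's exact modules; all sizes are placeholders.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Global pooling + MLP produce per-channel weights (squeeze-and-excitation style)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                       # x: (B, C, H, W)
        w = self.mlp(x.mean(dim=(2, 3)))        # global average pooling -> (B, C)
        return x * w[:, :, None, None]

class SpatialAttention(nn.Module):
    """Channel-pooled maps produce a per-pixel weight."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, 7, padding=3)

    def forward(self, x):
        pooled = torch.cat([x.mean(1, keepdim=True),
                            x.amax(1, keepdim=True)], dim=1)  # (B, 2, H, W)
        return x * torch.sigmoid(self.conv(pooled))

feat = torch.randn(2, 32, 28, 28)
out = SpatialAttention()(ChannelAttention(32)(feat))  # re-weighted feature map
```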

20 pages, 5244 KiB  
Article
Implementation of a DPU-Based Intelligent Thermal Imaging Hardware Accelerator on FPGA
by Abdelrahman S. Hussein, Ahmed Anwar, Yasmine Fahmy, Hassan Mostafa, Khaled Nabil Salama and Mai Kafafy
Electronics 2022, 11(1), 105; https://doi.org/10.3390/electronics11010105 - 29 Dec 2021
Cited by 9 | Viewed by 3605
Abstract
Thermal imaging has many applications, all of which leverage the heat map that this type of imaging constructs. It can be used in Internet of Things (IoT) applications to detect features of the surroundings. In such cases, Deep Neural Networks (DNNs) can carry out many visual analysis tasks and provide the system with the capacity to make decisions. However, due to their huge computational cost, such networks should exploit custom hardware platforms to accelerate inference and reduce the overall energy consumption of the system. In this work, an energy-adaptive system is proposed, which can intelligently configure itself based on the battery energy level. Besides achieving a maximum speedup of 6.38×, the proposed system reduces energy consumption by 97.81% compared to a conventional general-purpose CPU. Full article

25 pages, 8215 KiB  
Article
Early Diagnosis of Multiple Sclerosis Using Swept-Source Optical Coherence Tomography and Convolutional Neural Networks Trained with Data Augmentation
by Almudena López-Dorado, Miguel Ortiz, María Satue, María J. Rodrigo, Rafael Barea, Eva M. Sánchez-Morla, Carlo Cavaliere, José M. Rodríguez-Ascariz, Elvira Orduna-Hospital, Luciano Boquete and Elena Garcia-Martin
Sensors 2022, 22(1), 167; https://doi.org/10.3390/s22010167 - 27 Dec 2021
Cited by 19 | Viewed by 4805
Abstract
Background: The aim of this paper is to implement a system to facilitate the diagnosis of multiple sclerosis (MS) in its initial stages. It does so using a convolutional neural network (CNN) to classify images captured with swept-source optical coherence tomography (SS-OCT). Methods: SS-OCT images from 48 control subjects and 48 recently diagnosed MS patients have been used. These images show the thicknesses (45 × 60 points) of the following structures: complete retina, retinal nerve fiber layer, two ganglion cell layers (GCL+, GCL++) and choroid. The Cohen distance is used to identify the structures and the regions within them with greatest discriminant capacity. The original database of OCT images is augmented by a deep convolutional generative adversarial network to expand the CNN’s training set. Results: The retinal structures with greatest discriminant capacity are the GCL++ (44.99% of image points), complete retina (26.71%) and GCL+ (22.93%). Thresholding these images and using them as inputs to a CNN comprising two convolution modules and one classification module obtains sensitivity = specificity = 1.0. Conclusions: Feature pre-selection and the use of a convolutional neural network may be a promising, nonharmful, low-cost, easy-to-perform and effective means of assisting the early diagnosis of MS based on SS-OCT thickness data. Full article
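The Cohen distance used here for feature pre-selection is the classic two-group effect size. A sketch of ranking thickness-map points by |d| follows, with synthetic stand-in data of the stated 45 × 60 shape and an assumed 0.8 cutoff; neither the data nor the cutoff come from the paper.

```python
import numpy as np

def cohens_d(a, b):
    """Effect size between two groups at each thickness-map point."""
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * a.var(0, ddof=1) + (nb - 1) * b.var(0, ddof=1))
                     / (na + nb - 2))
    return (a.mean(0) - b.mean(0)) / pooled

# stand-in thickness maps: 48 controls and 48 patients, 45 x 60 points each
controls = np.random.normal(100, 5, (48, 45, 60))
patients = np.random.normal(96, 5, (48, 45, 60))

d = cohens_d(controls.reshape(48, -1), patients.reshape(48, -1)).reshape(45, 60)
selected = np.abs(d) > 0.8   # keep only strongly discriminant points for the CNN
print(selected.mean())       # fraction of retained points
```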

12 pages, 1051 KiB  
Article
Scheduling PID Attitude and Position Control Frequencies for Time-Optimal Quadrotor Waypoint Tracking under Unknown External Disturbances
by Cheongwoong Kang, Bumjin Park and Jaesik Choi
Sensors 2022, 22(1), 150; https://doi.org/10.3390/s22010150 - 27 Dec 2021
Cited by 11 | Viewed by 3317
Abstract
Recently, the use of quadrotors has increased in numerous applications, such as agriculture, rescue, transportation, inspection, and localization. Time-optimal quadrotor waypoint tracking is defined as controlling quadrotors to follow the given waypoints as quickly as possible. Although PID control is widely used for quadrotor control, it is not adaptable to environmental changes, such as various trajectories and dynamic external disturbances. In this work, we discover that adjusting PID control frequencies is necessary for adapting to environmental changes by showing that the optimal control frequencies can be different for different environments. Therefore, we suggest a method to schedule the PID position and attitude control frequencies for time-optimal quadrotor waypoint tracking. The method includes (1) a Control Frequency Agent (CFA) that finds the best control frequencies in various environments, (2) a Quadrotor Future Predictor (QFP) that predicts the next state of a quadrotor, and (3) combining the CFA and QFP for time-optimal quadrotor waypoint tracking under unknown external disturbances. The experimental results prove the effectiveness of the proposed method by showing that it reduces the travel time of a quadrotor for waypoint tracking. Full article
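Why the control frequency matters can be seen from a discrete PID update: the frequency fixes the step dt, which scales the integral and derivative terms, so the same gains behave differently at different rates. A minimal sketch (gains and frequencies invented for illustration):

```python
class PID:
    """Discrete PID controller whose update rate (control frequency) is a parameter."""
    def __init__(self, kp, ki, kd, freq_hz):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.dt = 1.0 / freq_hz      # scheduling the frequency changes dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# identical gains, different control frequencies, different control outputs
for hz in (50, 100, 400):
    pid = PID(kp=1.2, ki=0.4, kd=0.05, freq_hz=hz)
    print(hz, pid.update(setpoint=1.0, measurement=0.0))
```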

17 pages, 7869 KiB  
Article
Deep Learning-Based Thermal Image Analysis for Pavement Defect Detection and Classification Considering Complex Pavement Conditions
by Cheng Chen, Sindhu Chandra, Yufan Han and Hyungjoon Seo
Remote Sens. 2022, 14(1), 106; https://doi.org/10.3390/rs14010106 - 27 Dec 2021
Cited by 60 | Viewed by 8241
Abstract
Automatic damage detection using deep learning warrants an extensive data source that captures complex pavement conditions. This paper proposes a thermal-RGB fusion image-based pavement damage detection model, wherein the fused RGB-thermal image is formed from multi-source sensor information to achieve fast and accurate defect detection, including under complex pavement conditions. The proposed method uses a pre-trained EfficientNet B4 as the backbone architecture and generates an augmented dataset (also containing non-uniform illumination, camera noise, and different scales of thermal images) to achieve high pavement damage detection accuracy. The performance of different input data (RGB, thermal, MSX, and fused images) is tested separately to evaluate the influence of the input data and the network on the detection results. The results proved that the damage detection accuracy with the fused images can be as high as 98.34%; using the dataset after augmentation, the detection model proves more stable, achieving 98.35% precision, 98.34% recall, and a 98.34% F1-score. Full article

24 pages, 6442 KiB  
Review
Recent Development of Flexible Tactile Sensors and Their Applications
by Trong-Danh Nguyen and Jun Seop Lee
Sensors 2022, 22(1), 50; https://doi.org/10.3390/s22010050 - 22 Dec 2021
Cited by 56 | Viewed by 9897
Abstract
With the rapid development of society in recent decades, wearable sensors have attracted attention for motion-based health care and artificial intelligence applications. However, there are still many limitations to applying them in real life, particularly the inconvenience that comes from their large size and non-flexible systems. To solve these problems, flexible, small-sized sensors that use body motion as a stimulus are being studied to directly collect more accurate and diverse signals. In particular, tactile sensors are applied directly to the skin and provide input signals of motion change for the flexible reading device. This review provides information about the different types of tactile sensors and their working mechanisms: piezoresistive, piezocapacitive, piezoelectric, and triboelectric. Moreover, this review presents not only the applications of tactile sensors in motion sensing and health care monitoring, but also their contributions to the field of artificial intelligence in recent years. Other applications, such as human behavior studies, are also suggested. Full article

19 pages, 10184 KiB  
Article
Towards a More Realistic and Detailed Deep-Learning-Based Radar Echo Extrapolation Method
by Yuan Hu, Lei Chen, Zhibin Wang, Xiang Pan and Hao Li
Remote Sens. 2022, 14(1), 24; https://doi.org/10.3390/rs14010024 - 22 Dec 2021
Cited by 21 | Viewed by 7599
Abstract
Deep-learning-based radar echo extrapolation methods have achieved remarkable progress in the precipitation nowcasting field. However, they suffer from a common notorious problem—they tend to produce blurry predictions. Although some efforts have been made in recent years, the blurring problem is still under-addressed. In this work, we propose three effective strategies to assist deep-learning-based radar echo extrapolation methods to achieve more realistic and detailed prediction. Specifically, we propose a spatial generative adversarial network (GAN) and a spectrum GAN to improve image fidelity. The spatial and spectrum GANs aim at penalizing the distribution discrepancy between generated and real images from the spatial domain and spectral domain, respectively. In addition, a masked style loss is devised to further enhance the details by transferring the detailed texture of ground truth radar sequences to extrapolated ones. We apply a foreground mask to prevent the background noise from transferring to the outputs. Moreover, we also design a new metric termed the power spectral density score (PSDS) to quantify the perceptual quality from a frequency perspective. The PSDS metric can be applied as a complement to other visual evaluation metrics (e.g., LPIPS) to achieve a comprehensive measurement of image sharpness. We test our approaches with both ConvLSTM baseline and U-Net baseline, and comprehensive ablation experiments on the SEVIR dataset show that the proposed approaches are able to produce much more realistic radar images than baselines. Most notably, our methods can be readily applied to any deep-learning-based spatiotemporal forecasting models to acquire more detailed results. Full article
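The paper's PSDS compares predictions and observations in the frequency domain. As an illustration of the general idea only, not the authors' exact formula, one can radially average the 2-D power spectrum and measure the log-spectral gap, which grows as predictions blur; the data and the distance definition below are assumptions.

```python
import numpy as np

def radial_psd(img):
    """Radially averaged power spectral density of a 2-D field."""
    psd2 = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2).astype(int)
    return np.bincount(r.ravel(), psd2.ravel()) / np.bincount(r.ravel())

def log_spectral_distance(pred, truth, eps=1e-12):
    """Mean absolute log-PSD gap; 0 for identical spectra (illustrative metric)."""
    return float(np.mean(np.abs(np.log(radial_psd(pred) + eps)
                                - np.log(radial_psd(truth) + eps))))

truth = np.random.rand(64, 64)
blurry = truth.mean() + 0.5 * (truth - truth.mean())  # crude stand-in for a blurred echo
print(log_spectral_distance(truth, truth))   # 0.0
print(log_spectral_distance(blurry, truth))  # > 0: blurring shifts the spectrum
```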

31 pages, 2792 KiB  
Article
Pseudo-Labeling Optimization Based Ensemble Semi-Supervised Soft Sensor in the Process Industry
by Youwei Li, Huaiping Jin, Shoulong Dong, Biao Yang and Xiangguang Chen
Sensors 2021, 21(24), 8471; https://doi.org/10.3390/s21248471 - 19 Dec 2021
Cited by 6 | Viewed by 3488
Abstract
Nowadays, soft sensor techniques have become promising solutions for enabling real-time estimation of difficult-to-measure quality variables in industrial processes. However, labeled data are often scarce in many real-world applications, which poses a significant challenge when building accurate soft sensor models. Therefore, this paper proposes a novel semi-supervised soft sensor method, referred to as ensemble semi-supervised negative correlation learning extreme learning machine (EnSSNCLELM), for industrial processes with limited labeled data. First, an improved supervised regression algorithm called NCLELM is developed by integrating the philosophy of negative correlation learning into the extreme learning machine (ELM). Then, with NCLELM as the base learning technique, a multi-learner pseudo-labeling optimization approach is proposed that converts the estimation of pseudo-labels into an explicit optimization problem, in order to obtain high-confidence pseudo-labeled data. Furthermore, a set of diverse semi-supervised NCLELM models (SSNCLELM) are developed from different enlarged labeled sets, which are obtained by combining the labeled and pseudo-labeled training data. Finally, those SSNCLELM models whose prediction accuracies were not worse than those of their supervised counterparts were combined using a stacking strategy. The proposed method can not only exploit both labeled and unlabeled data, but also combine the merits of the semi-supervised and ensemble learning paradigms, thereby providing superior predictions over traditional supervised and semi-supervised soft sensor methods. The effectiveness and superiority of the proposed method were demonstrated through two chemical applications. Full article
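The core pseudo-labeling loop, estimating labels for unlabeled samples, keeping only the high-confidence ones, and retraining on the enlarged set, can be sketched generically as below. For simplicity the sketch uses a random forest, with ensemble spread as the confidence signal, as a stand-in for the paper's NCLELM learners; all data, thresholds, and round counts are synthetic assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def pseudo_label_train(model, X_lab, y_lab, X_unlab, keep_frac=0.2, rounds=3):
    """Grow the labeled set with the most confident pseudo-labels each round."""
    X, y = X_lab.copy(), y_lab.copy()
    for _ in range(rounds):
        model.fit(X, y)
        preds = np.stack([t.predict(X_unlab) for t in model.estimators_])
        spread = preds.std(axis=0)                 # low spread = high confidence
        idx = np.argsort(spread)[:int(keep_frac * len(X_unlab))]
        X = np.vstack([X, X_unlab[idx]])
        y = np.r_[y, preds.mean(axis=0)[idx]]      # pseudo-labels
        X_unlab = np.delete(X_unlab, idx, axis=0)  # remove used samples from the pool
    return model.fit(X, y)

rng = np.random.default_rng(0)
X_all = rng.uniform(-2, 2, (500, 3))
y_all = X_all @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(500)
model = pseudo_label_train(RandomForestRegressor(n_estimators=30, random_state=0),
                           X_all[:30], y_all[:30], X_all[30:])
```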

15 pages, 2736 KiB  
Article
SalfMix: A Novel Single Image-Based Data Augmentation Technique Using a Saliency Map
by Jaehyeop Choi, Chaehyeon Lee, Donggyu Lee and Heechul Jung
Sensors 2021, 21(24), 8444; https://doi.org/10.3390/s21248444 - 17 Dec 2021
Cited by 12 | Viewed by 5260
Abstract
Modern data augmentation strategies, such as Cutout, Mixup, and CutMix, have achieved good performance in image recognition tasks. In particular, approaches such as Mixup and CutMix, which mix two images to generate a mixed training image, can generalize convolutional neural networks better than single-image-based approaches such as Cutout. Focusing on the fact that mixed images improve generalization, we asked whether the same idea could be applied effectively to a single image. Consequently, we propose a new data augmentation method, called SalfMix, that produces a self-mixed image based on a saliency map. Furthermore, we combined SalfMix with state-of-the-art two-image-based approaches, such as Mixup, SaliencyMix, and CutMix, to further increase performance; we call this HybridMix. The proposed SalfMix achieved better accuracies than Cutout, and HybridMix achieved state-of-the-art performance on three classification datasets: CIFAR-10, CIFAR-100, and TinyImageNet-200. Furthermore, HybridMix achieved the best accuracy in object detection tasks on the VOC dataset, in terms of mean average precision. Full article
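A crude rendition of single-image self-mixing: find the most and least salient patches of an image and paste the former over the latter. The saliency map here is a simple intensity stand-in and the patch logic is a guess at the spirit of SalfMix, not the published recipe.

```python
import numpy as np

def salfmix_like(img, saliency, patch=16):
    """Paste the most salient patch of an image over its least salient patch.
    Rough sketch of single-image self-mixing; not the authors' exact method."""
    h, w = saliency.shape
    s = saliency[:h // patch * patch, :w // patch * patch]
    s = s.reshape(h // patch, patch, w // patch, patch).mean((1, 3))  # patch saliency
    iy, ix = np.unravel_index(np.argmax(s), s.shape)   # most salient patch
    jy, jx = np.unravel_index(np.argmin(s), s.shape)   # least salient patch
    out = img.copy()
    out[jy*patch:(jy+1)*patch, jx*patch:(jx+1)*patch] = \
        img[iy*patch:(iy+1)*patch, ix*patch:(ix+1)*patch]
    return out

rng = np.random.default_rng(0)
image = rng.random((64, 64, 3))
sal = image.mean(axis=2)          # stand-in saliency map (intensity)
augmented = salfmix_like(image, sal)
```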

19 pages, 4151 KiB  
Article
Wearable Devices, Smartphones, and Interpretable Artificial Intelligence in Combating COVID-19
by Haytham Hijazi, Manar Abu Talib, Ahmad Hasasneh, Ali Bou Nassif, Nafisa Ahmed and Qassim Nasir
Sensors 2021, 21(24), 8424; https://doi.org/10.3390/s21248424 - 17 Dec 2021
Cited by 23 | Viewed by 6122
Abstract
Physiological measures, such as heart rate variability (HRV) and beats per minute (BPM), can be powerful health indicators of respiratory infections. HRV and BPM can be acquired through widely available wrist-worn biometric wearables and smartphones. Successive abnormal changes in these indicators could potentially be an early sign of respiratory infections such as COVID-19. Thus, wearables and smartphones should play a significant role in combating COVID-19 through early detection, supported by other contextual data and artificial intelligence (AI) techniques. In this paper, we investigate the role of heart measurements (i.e., HRV and BPM) collected from wearables and smartphones in revealing early onsets of the inflammatory response to COVID-19. The AI framework consists of two blocks: an interpretable prediction model that classifies the HRV measurement status (as normal or affected by inflammation), and a recurrent neural network (RNN) that analyzes users' daily status (i.e., textual logs in a mobile application). Both classification decisions are integrated to generate the final decision: either “potentially COVID-19 infected” or “no evident signs of infection”. We used a publicly available dataset comprising 186 patients with more than 3200 HRV readings and numerous user textual logs. A first evaluation of the approach showed an accuracy of 83.34 ± 1.68%, with a precision, recall, and F1-score of 0.91, 0.88, and 0.89, respectively, in predicting the infection two days before the onset of symptoms, supported by model interpretation using local interpretable model-agnostic explanations (LIME). Full article

33 pages, 5325 KiB  
Article
Human Activity Recognition of Individuals with Lower Limb Amputation in Free-Living Conditions: A Pilot Study
by Alexander Jamieson, Laura Murray, Lina Stankovic, Vladimir Stankovic and Arjan Buis
Sensors 2021, 21(24), 8377; https://doi.org/10.3390/s21248377 - 15 Dec 2021
Cited by 8 | Viewed by 2946
Abstract
This pilot study aimed to investigate the implementation of supervised classifiers and a neural network for the recognition of activities carried out by Individuals with Lower Limb Amputation (ILLAs), as well as individuals without gait impairment, in free living conditions. Eight individuals with no gait impairments and four ILLAs wore a thigh-based accelerometer and walked on an improvised route in the vicinity of their homes across a variety of terrains. Various machine learning classifiers were trained and tested for recognition of walking activities. Additional investigations were made regarding the detail of the activity label versus classifier accuracy and whether the classifiers were capable of being trained exclusively on non-impaired individuals’ data and could recognize physical activities carried out by ILLAs. At a basic level of label detail, Support Vector Machines (SVM) and Long-Short Term Memory (LSTM) networks were able to acquire 77–78% mean classification accuracy, which fell with increased label detail. Classifiers trained on individuals without gait impairment could not recognize activities carried out by ILLAs. This investigation presents the groundwork for a HAR system capable of recognizing a variety of walking activities, both for individuals with no gait impairments and ILLAs. Full article

25 pages, 4471 KiB  
Article
A Hybrid Lightweight System for Early Attack Detection in the IoMT Fog
by Shilan S. Hameed, Ali Selamat, Liza Abdul Latiff, Shukor A. Razak, Ondrej Krejcar, Hamido Fujita, Mohammad Nazir Ahmad Sharif and Sigeru Omatu
Sensors 2021, 21(24), 8289; https://doi.org/10.3390/s21248289 - 11 Dec 2021
Cited by 15 | Viewed by 3551
Abstract
Cyber-attack detection via on-gadget embedded models and cloud systems is widely used for the Internet of Medical Things (IoMT). The former has limited computation ability, whereas the latter has a long detection time. Fog-based attack detection is used as an alternative to overcome these problems. However, current fog-based systems cannot handle the ever-increasing big data of the IoMT. Moreover, they are not lightweight and are designed for network attack detection only. In this work, a hybrid (host and network) lightweight system is proposed for early attack detection in the IoMT fog. In an adaptive online setting, six different incremental classifiers were implemented, namely a novel Weighted Hoeffding Tree Ensemble (WHTE), Incremental K-Nearest Neighbors (IKNN), Incremental Naïve Bayes (INB), Hoeffding Tree Majority Class (HTMC), Hoeffding Tree Naïve Bayes (HTNB), and Hoeffding Tree Naïve Bayes Adaptive (HTNBA). The system was benchmarked with seven heterogeneous sensors and NetFlow data infected with nine types of recent attacks. The results showed that the proposed system worked well on lightweight fog devices, with ~100% accuracy, a low detection time, and a memory usage of less than 6 MiB. A single-criterion comparative analysis showed that the WHTE ensemble was more accurate and less sensitive to concept drift. Full article

25 pages, 6379 KiB  
Article
Specific Radar Recognition Based on Characteristics of Emitted Radio Waveforms Using Convolutional Neural Networks
by Jan Matuszewski and Dymitr Pietrow
Sensors 2021, 21(24), 8237; https://doi.org/10.3390/s21248237 - 9 Dec 2021
Cited by 16 | Viewed by 4302
Abstract
With the increasing complexity of the electromagnetic environment and the continuous development of radar technology, a large number of modern radars using agile waveforms can be expected on the battlefield in the near future. Effectively identifying these radar signals in electronic warfare systems using only traditional recognition models poses a serious challenge. In response, this paper proposes a method for recognizing emitted radar signals with agile waveforms based on a convolutional neural network (CNN). These signals are measured in electronic recognition receivers and processed into digital data, after which they undergo recognition. The implementation of this system is presented in a simulation environment with the help of a signal generator that can alter signal signatures previously recognized and written into the emitter database. This article describes the software components, the learning subsystem, and the signal generator. The problem of training neural networks using graphics processing units and the choice of learning coefficients are also outlined. The correctness of the CNN operation was tested using a simulation environment that verified its effectiveness in a noisy environment and in conditions where many mutually interfering radar signals are present. The effectiveness of the applied solutions and the possibilities for developing the learning and processing algorithms are presented in tables and figures. The experimental results demonstrate that the proposed method can effectively solve the problem of recognizing raw radar signals with agile time waveforms, achieving a correct recognition probability of 92–99%. Full article

21 pages, 5672 KiB  
Article
Effect of a Recliner Chair with Rocking Motions on Sleep Efficiency
by Suwhan Baek, Hyunsoo Yu, Jongryun Roh, Jungnyun Lee, Illsoo Sohn, Sayup Kim and Cheolsoo Park
Sensors 2021, 21(24), 8214; https://doi.org/10.3390/s21248214 - 8 Dec 2021
Cited by 9 | Viewed by 5158
Abstract
In this study, we analyze the effect of a recliner chair with rocking motions on nap sleep quality using automated sleep scoring and spindle detection models. The quality of sleep corresponding to the two rocking motions was measured quantitatively and qualitatively. For the quantitative evaluation, we analyzed sleep parameters based on the sleep stages estimated from brainwave signals and on spindle detection; for the qualitative evaluation, we analyzed a sleep survey completed by the participants. The analysis showed that sleeping in the recliner chair with rocking motions increased the duration of spindles and of the deep sleep stage, resulting in improved sleep quality. Full article

25 pages, 1848 KiB  
Article
Comparative Study of Markerless Vision-Based Gait Analyses for Person Re-Identification
by Jaerock Kwon, Yunju Lee and Jehyung Lee
Sensors 2021, 21(24), 8208; https://doi.org/10.3390/s21248208 - 8 Dec 2021
Cited by 7 | Viewed by 3893
Abstract
Model-based gait analysis of the kinematic characteristics of the human body has been used to identify individuals. To extract gait features, spatiotemporal changes of anatomical landmarks of the human body in 3D are preferable. Without special lab settings, 2D images are easily acquired by monocular video cameras in real-world settings. The 2D and 3D locations of key joint positions are estimated by 2D and 3D pose estimators, so the 3D joint positions can be estimated from 2D image sequences of human gait. Yet, obtaining exact gait features for a person has been challenging due to viewpoint variance and the occlusion of body parts in 2D images. In this study, we conducted a comparative study of two approaches to viewpoint-invariant person re-identification using gait patterns: feature-based and spatiotemporal-based. The first method uses gait features extracted from time-series 3D joint positions to identify an individual. The second method uses a neural network, a Siamese Long Short-Term Memory (LSTM) network, with the 3D spatiotemporal changes of key joint positions in a gait cycle to classify an individual without extracting gait features. To validate and compare these two methods, we conducted experiments with two open datasets, MARS and CASIA-A. The results show that the Siamese LSTM outperforms the gait-feature-based approaches by 20% on the MARS dataset and by 55% on the CASIA-A dataset, suggesting that feature-based gait analysis using current 2D and 3D pose estimators is premature. As future work, we suggest developing large-scale human gait datasets and designing accurate 2D and 3D joint position estimators tailored to gait patterns. We expect the current comparative study and this future work to contribute to rehabilitation research, forensic gait analysis, and the early detection of neurological disorders. Full article

19 pages, 16322 KiB  
Article
A Novel Hybrid NN-ABPE-Based Calibration Method for Improving Accuracy of Lateration Positioning System
by Milica Petrović, Maciej Ciężkowski, Sławomir Romaniuk, Adam Wolniakowski and Zoran Miljković
Sensors 2021, 21(24), 8204; https://doi.org/10.3390/s21248204 - 8 Dec 2021
Cited by 5 | Viewed by 2600
Abstract
Positioning systems based on the lateration method utilize distance measurements and the knowledge of the location of the beacons to estimate the position of the target object. Although most of the global positioning techniques rely on beacons whose locations are known a priori, miscellaneous factors and disturbances such as obstacles, reflections, signal propagation speed, the orientation of antennas, measurement offsets of the beacons hardware, electromagnetic noise, or delays can affect the measurement accuracy. In this paper, we propose a novel hybrid calibration method based on Neural Networks (NN) and Apparent Beacon Position Estimation (ABPE) to improve the accuracy of a lateration positioning system. The main idea of the proposed method is to use a two-step position correction pipeline that first performs the ABPE step to estimate the perceived positions of the beacons that are used in the standard position estimation algorithm and then corrects these initial estimates by filtering them with a multi-layer feed-forward neural network in the second step. In order to find an optimal neural network, 16 NN architectures with 10 learning algorithms and 12 different activation functions for hidden layers were implemented and tested in the MATLAB environment. The best training outcomes for NNs were then employed in two real-world indoor scenarios: without and with obstacles. With the aim to validate the proposed methodology in a scenario where a fast set-up of the system is desired, we tested eight different uniform sampling patterns to establish the influence of the number of the training samples on the accuracy of the system. The experimental results show that the proposed hybrid NN-ABPE method can achieve a high level of accuracy even in scenarios when a small number of calibration reference points are measured. Full article

25 pages, 2170 KiB  
Article
A Hybrid Vision Processing Unit with a Pipelined Workflow for Convolutional Neural Network Accelerating and Image Signal Processing
by Peng Liu and Yan Song
Electronics 2021, 10(23), 2989; https://doi.org/10.3390/electronics10232989 - 1 Dec 2021
Cited by 5 | Viewed by 2723
Abstract
Vision processing chips have been widely used in image processing and recognition tasks. They are conventionally designed around image signal processing (ISP) units directly connected to the sensors. In recent years, convolutional neural networks (CNNs) have become the dominant tools for many state-of-the-art vision processing tasks. However, CNNs cannot be processed at high speed by a conventional vision processing unit (VPU). On the other hand, CNN processing units cannot directly process the RAW images from the sensors, so an ISP unit is required. This makes a vision system inefficient, with a lot of data transmission and redundant hardware resources. Additionally, many CNN processing units offer low flexibility for various CNN operations. To solve this problem, this paper proposes an efficient vision processing unit based on a hybrid processing-element array for both CNN acceleration and ISP. Resources are highly shared in this VPU, and a pipelined workflow is introduced to accelerate vision tasks. We implement the proposed VPU on a Field-Programmable Gate Array (FPGA) platform and test various vision tasks on it. The results show that this VPU achieves high efficiency for both CNN processing and ISP, and shows a significant reduction in energy consumption for vision tasks consisting of CNNs and ISP. For various CNN tasks, it maintains an average multiply-accumulator utilization of over 94% and achieves a performance of 163.2 GOPS at a frequency of 200 MHz. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)

16 pages, 1516 KiB  
Article
The Whale Optimization Algorithm Approach for Deep Neural Networks
by Andrzej Brodzicki, Michał Piekarski and Joanna Jaworek-Korjakowska
Sensors 2021, 21(23), 8003; https://doi.org/10.3390/s21238003 - 30 Nov 2021
Cited by 52 | Viewed by 7204
Abstract
One of the biggest challenges in the field of deep learning is the parameter selection and optimization process. In recent years, different algorithms, including bio-inspired solutions, have been proposed to solve this problem; however, many challenges remain, including local minima, saddle points, and vanishing gradients. In this paper, we introduce the Whale Optimisation Algorithm (WOA), based on the swarm foraging behavior of humpback whales, to optimise neural network hyperparameters. We wish to stress that, to the best of our knowledge, this is the first attempt to use the Whale Optimisation Algorithm for hyperparameter optimisation. After a detailed description of the WOA, we formulate and explain its application in deep learning, present the implementation, and compare the proposed algorithm with other well-known algorithms, including the widely used Grid and Random Search methods. Additionally, we have extended the original WOA with a third-dimension feature analysis to utilize a 3D search space (3D-WOA). Simulations show that the proposed algorithm can be successfully used for hyperparameter optimization, achieving accuracies of 89.85% and 80.60% on the Fashion MNIST and Reuters datasets, respectively. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)
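As context for how such a swarm search proceeds, here is a compact, hedged sketch of the canonical WOA loop (encircling prey, logarithmic spiral update, and random exploration), minimising a toy sphere function instead of network hyperparameters; the paper's 3D-WOA extension and hyperparameter encoding are not reproduced.

```python
import numpy as np

def woa(fitness, dim=5, n_whales=20, iters=200, lb=-5.0, ub=5.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_whales, dim))
    best = X[np.argmin([fitness(x) for x in X])].copy()
    for t in range(iters):
        a = 2.0 * (1 - t / iters)                  # linearly decreasing 2 -> 0
        for i in range(n_whales):
            r1, r2 = rng.random(dim), rng.random(dim)
            A, C = 2 * a * r1 - a, 2 * r2
            if rng.random() < 0.5:                 # encircling / exploration
                ref = best if np.all(np.abs(A) < 1) else X[rng.integers(n_whales)]
                X[i] = ref - A * np.abs(C * ref - X[i])
            else:                                  # logarithmic spiral update
                l = rng.uniform(-1, 1)
                X[i] = np.abs(best - X[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
            X[i] = np.clip(X[i], lb, ub)
            if fitness(X[i]) < fitness(best):
                best = X[i].copy()
    return best

print(woa(lambda x: np.sum(x**2)))  # converges toward the zero vector
```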

18 pages, 5848 KiB  
Article
Recognition of Noisy Radar Emitter Signals Using a One-Dimensional Deep Residual Shrinkage Network
by Shengli Zhang, Jifei Pan, Zhenzhong Han and Linqing Guo
Sensors 2021, 21(23), 7973; https://doi.org/10.3390/s21237973 - 29 Nov 2021
Cited by 8 | Viewed by 2505
Abstract
Signal features can be obscured in noisy environments, resulting in low accuracy of radar emitter signal recognition based on traditional methods. To improve the ability to learn features from noisy signals, a new radar emitter signal recognition method based on a one-dimensional (1D) deep residual shrinkage network (DRSN) is proposed, which offers the following advantages: (i) unimportant features are eliminated using the soft thresholding function, with the thresholds automatically set based on the attention mechanism; (ii) without any professional knowledge of signal processing or dimension conversion of data, the 1D DRSN can automatically learn the features characterizing the signal directly from 1D data and achieve a high recognition rate for noisy signals. The effectiveness of the 1D DRSN was experimentally verified under different types of noise. In addition, comparison with other deep learning methods revealed the superior performance of the DRSN. Finally, the mechanism of eliminating redundant features using the soft thresholding function was analyzed. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)
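The soft thresholding function mentioned in the abstract is simple enough to state directly. A minimal sketch follows; in the actual DRSN the threshold is produced per channel by a small attention sub-network, whereas here it is a fixed illustrative value.

```python
import numpy as np

def soft_threshold(x: np.ndarray, tau: float) -> np.ndarray:
    # Shrink small-magnitude (presumably noise-dominated) entries to zero,
    # keeping the sign and reducing the magnitude of the rest by tau.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

features = np.array([-1.8, -0.2, 0.05, 0.9, 2.4])
print(soft_threshold(features, tau=0.3))  # [-1.5 -0.   0.   0.6  2.1]
```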

18 pages, 3328 KiB  
Article
Estimation of Knee Joint Extension Force Using Mechanomyography Based on IGWO-SVR Algorithm
by Zebin Li, Lifu Gao, Wei Lu, Daqing Wang, Chenlei Xie and Huibin Cao
Electronics 2021, 10(23), 2972; https://doi.org/10.3390/electronics10232972 - 29 Nov 2021
Cited by 6 | Viewed by 2126
Abstract
Muscle force is an important physiological parameter of the human body. Accurate estimation of muscle force can improve the stability and flexibility of lower-limb joint auxiliary equipment. Nevertheless, existing force estimation methods can neither satisfy the accuracy requirement nor ensure the validity of the estimation results, which remains a challenging problem. Among many optimization algorithms, gray wolf optimization (GWO) is widely used to find the optimal parameters of regression models because of its superior optimization ability. Because the traditional GWO is prone to falling into local optima, a new nonlinear convergence factor and a new position update strategy are employed to balance local and global search capability. In this paper, an improved gray wolf optimization (IGWO) algorithm is developed to optimize support vector regression (SVR) for accurate and timely estimation of knee joint extension force. First, mechanomyography (MMG) of the lower limb is measured by acceleration sensors during isometric leg extension training. Second, the root mean square (RMS), mean absolute value (MAV), zero crossing (ZC), mean power frequency (MPF), and sample entropy (SE) of the MMG are extracted to construct feature sets as candidate data sets for regression analysis. Lastly, the features are fed into IGWO-SVR for further training. Experiments demonstrate that IGWO-SVR provides the best performance indexes in the estimation of knee joint extension force in terms of RMSE, MAPE, and R compared with other state-of-the-art models. These results are expected to provide effective guidance for rehabilitation training, muscle disease diagnosis, and health evaluation. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)
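As a rough illustration of wrapping a swarm optimizer around SVR hyperparameters, the sketch below runs a classical grey wolf loop over (C, gamma) on synthetic data with scikit-learn. The paper's IGWO adds a new nonlinear convergence factor and position update strategy that are not reproduced here; the linear decay, search bounds, and fitness definition below are assumptions.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=200, n_features=5, noise=0.1, random_state=0)

def fitness(p):  # p = [log10(C), log10(gamma)]; lower is better
    model = SVR(C=10 ** p[0], gamma=10 ** p[1])
    return -cross_val_score(model, X, y, cv=3, scoring="r2").mean()

rng = np.random.default_rng(0)
wolves = rng.uniform(-2, 2, (8, 2))
for t in range(20):
    a = 2.0 * (1 - t / 20)                       # classical linear decay
    scores = np.array([fitness(w) for w in wolves])
    alpha, beta, delta = wolves[np.argsort(scores)[:3]]
    for i in range(len(wolves)):
        new = np.zeros(2)
        for leader in (alpha, beta, delta):      # average of three leader pulls
            A = 2 * a * rng.random(2) - a
            C = 2 * rng.random(2)
            new += leader - A * np.abs(C * leader - wolves[i])
        wolves[i] = np.clip(new / 3.0, -2, 2)
best = wolves[np.argmin([fitness(w) for w in wolves])]
print("C=%.3g gamma=%.3g" % (10 ** best[0], 10 ** best[1]))
```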

16 pages, 4013 KiB  
Article
One Spatio-Temporal Sharpening Attention Mechanism for Light-Weight YOLO Models Based on Sharpening Spatial Attention
by Mengfan Xue, Minghao Chen, Dongliang Peng, Yunfei Guo and Huajie Chen
Sensors 2021, 21(23), 7949; https://doi.org/10.3390/s21237949 - 28 Nov 2021
Cited by 16 | Viewed by 3868
Abstract
Attention mechanisms have demonstrated great potential in improving the performance of deep convolutional neural networks (CNNs). However, many existing methods are dedicated to developing channel or spatial attention modules for CNNs with many parameters, and complex attention modules inevitably affect the performance of CNNs. In our experiments embedding the Convolutional Block Attention Module (CBAM) in the light-weight model YOLOv5s, CBAM slowed the model and increased its complexity while reducing the average precision, although its Squeeze-and-Excitation (SE) component had a positive impact on the model. To replace the spatial attention module in CBAM and offer a suitable scheme of channel and spatial attention modules, this paper proposes a Spatio-temporal Sharpening Attention Mechanism (SSAM), which sequentially infers intermediate maps along a channel attention module and a Sharpening Spatial Attention (SSA) module. By introducing a sharpening filter into the spatial attention module, we obtain an SSA module with low complexity. We seek a scheme that combines our SSA module with an SE module or an Efficient Channel Attention (ECA) module to achieve the best improvement in models such as YOLOv5s and YOLOv3-tiny. Therefore, we perform various replacement experiments and identify the best scheme, which is to embed channel attention modules in the backbone and neck of the model and integrate SSAM into the YOLO head. We verify the positive effect of our SSAM on two general object detection datasets, VOC2012 and MS COCO2017: one for obtaining a suitable scheme and the other for proving the versatility of our method in complex scenes. Experimental results on the two datasets show clear improvements in average precision and detection performance, which demonstrates the usefulness of our SSAM in light-weight YOLO models. Furthermore, visualization results also show the advantage of enhanced positioning ability with our SSAM. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)
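A hedged sketch of what a "sharpening" spatial attention gate can look like: a fixed high-pass (sharpening) kernel emphasises edges in a channel-pooled map, and a sigmoid turns the result into a spatial mask. The paper's actual SSA module may differ in kernel choice and composition.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharpenSpatialAttention(nn.Module):
    def __init__(self):
        super().__init__()
        # A classic 3x3 sharpening kernel, kept fixed (non-learnable) here.
        k = torch.tensor([[0., -1., 0.], [-1., 5., -1.], [0., -1., 0.]])
        self.register_buffer("kernel", k.view(1, 1, 3, 3))

    def forward(self, x):                        # x: (B, C, H, W)
        pooled = x.mean(dim=1, keepdim=True)     # channel-average map (B,1,H,W)
        sharp = F.conv2d(pooled, self.kernel, padding=1)
        return x * torch.sigmoid(sharp)          # re-weight features spatially

feats = torch.randn(2, 16, 32, 32)
print(SharpenSpatialAttention()(feats).shape)    # torch.Size([2, 16, 32, 32])
```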

14 pages, 2632 KiB  
Article
Spherically Stratified Point Projection: Feature Image Generation for Object Classification Using 3D LiDAR Data
by Chulhee Bae, Yu-Cheol Lee, Wonpil Yu and Sejin Lee
Sensors 2021, 21(23), 7860; https://doi.org/10.3390/s21237860 - 25 Nov 2021
Cited by 2 | Viewed by 2922
Abstract
Three-dimensional point clouds have been utilized and studied for the classification of objects at the environmental level. While most existing studies, such as those in the field of computer vision, have detected object type from the perspective of sensors, this study developed a specialized strategy for object classification using LiDAR data points on the surface of the object. We propose a method for generating a spherically stratified point projection (sP2) feature image that can be applied to existing image-classification networks by performing pointwise classification based on a 3D point cloud using only LiDAR sensor data. The sP2's main engine performs image generation through spherical stratification, evidence collection, and channel integration. Spherical stratification categorizes neighboring points into three layers according to distance ranges. Evidence collection calculates the occupancy probability based on Bayes' rule to project 3D points onto a two-dimensional surface corresponding to each stratified layer. Channel integration generates sP2 RGB images with three evidence values representing short, medium, and long distances. Finally, the sP2 images are used as a trainable source for classifying the points into predefined semantic labels. Experimental results, obtained with the LeNet architecture as the classifier, indicated the effectiveness of the proposed sP2 feature images. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)
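The spherical stratification step can be pictured with a few lines of code: each neighbouring point is expressed in spherical coordinates around a query point and binned into short/medium/long range layers that would feed the three image channels. The bin edges below are illustrative assumptions, not the paper's values.

```python
import numpy as np

def stratify(points: np.ndarray, center: np.ndarray, edges=(1.0, 2.5, 5.0)):
    rel = points - center
    r = np.linalg.norm(rel, axis=1)                    # range to query point
    az = np.arctan2(rel[:, 1], rel[:, 0])              # azimuth angle
    el = np.arcsin(np.clip(rel[:, 2] / np.maximum(r, 1e-9), -1, 1))  # elevation
    layer = np.digitize(r, edges)                      # 0: short, 1: mid, 2: long
    return az, el, layer

pts = np.random.default_rng(0).uniform(-4, 4, (1000, 3))
az, el, layer = stratify(pts, center=np.zeros(3))
print(np.bincount(layer, minlength=4))                 # points per distance stratum
```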

23 pages, 1934 KiB  
Article
A Domain-Independent Generative Adversarial Network for Activity Recognition Using WiFi CSI Data
by Augustinas Zinys, Bram van Berlo and Nirvana Meratnia
Sensors 2021, 21(23), 7852; https://doi.org/10.3390/s21237852 - 25 Nov 2021
Cited by 12 | Viewed by 3471
Abstract
Over the past years, device-free sensing has received considerable attention due to its unobtrusiveness. In this regard, context recognition using WiFi Channel State Information (CSI) data has gained popularity, and various techniques have been proposed that combine unobtrusive sensing and deep learning to accurately detect various contexts, ranging from human activities to gestures. However, research has shown that the performance of these techniques degrades significantly due to changes in various factors, including the sensing environment, data collection configuration, diversity of target subjects, and target learning task (e.g., activities, gestures, emotions, vital signs). This problem, generally known as the domain change problem, is typically addressed by collecting more data and learning a data distribution that covers the multiple factors impacting the performance. However, activity recognition data collection is a very labor-intensive and time-consuming task, and there are too many known and unknown factors impacting WiFi CSI signals. In this paper, we propose a domain-independent generative adversarial network for WiFi CSI-based activity recognition in combination with a simplified data pre-processing module. Our evaluation results show the superiority of our proposed approach compared to the state of the art in terms of increased robustness against domain change, higher activity recognition accuracy, and reduced model complexity. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)

12 pages, 4525 KiB  
Article
Self-Supervised Denoising Image Filter Based on Recursive Deep Neural Network Structure
by Changhee Kang and Sang-ug Kang
Sensors 2021, 21(23), 7827; https://doi.org/10.3390/s21237827 - 24 Nov 2021
Cited by 2 | Viewed by 2374
Abstract
The purpose of this paper is to propose a novel noise removal method based on deep neural networks that can remove various types of noise without paired noisy and clean data. Because this type of filter generally has relatively poor performance, the proposed noise-to-blur-estimated clean (N2BeC) model introduces a stage-dependent loss function and a recursive learning stage for improved denoised image quality. The proposed loss function regularizes the existing loss function so that the proposed model can better learn image details. Moreover, the recursive learning stage provides the proposed model with an additional opportunity to learn image details. The overall deep neural network consists of three learning stages and three corresponding loss functions. We determined the essential hyperparameters via several simulations. Consequently, the proposed model showed a performance gain of more than 1 dB over the existing noise-to-blur model. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)

19 pages, 513 KiB  
Article
Sensor-Based Human Activity Recognition Using Adaptive Class Hierarchy
by Kazuma Kondo and Tatsuhito Hasegawa
Sensors 2021, 21(22), 7743; https://doi.org/10.3390/s21227743 - 21 Nov 2021
Cited by 5 | Viewed by 3312
Abstract
In sensor-based human activity recognition, many methods based on convolutional neural networks (CNNs) have been proposed. In the typical CNN-based activity recognition model, each class is treated independently of the others. However, actual activity classes often have hierarchical relationships, and it is important to consider an activity recognition model that uses these relationships among classes to improve recognition performance. In image recognition, branch CNNs (B-CNNs) have been proposed for classification using class hierarchies. B-CNNs can easily perform classification using hand-crafted class hierarchies, but it is difficult to manually design an appropriate class hierarchy when the number of classes is large or there is little prior knowledge. Therefore, in our study, we propose a class hierarchy-adaptive B-CNN, which extends the B-CNN with a method for automatically constructing class hierarchies. Our method constructs the class hierarchy from the training data automatically, so that the B-CNN can be trained effectively without prior knowledge. We evaluated our method on several benchmark datasets for activity recognition. As a result, our method outperformed standard CNN models that do not consider the hierarchical relationship among classes. In addition, we confirmed that our method has performance comparable to a B-CNN model with a class hierarchy based on human prior knowledge. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)
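One simple way to build a class hierarchy from training data alone, in the spirit of the paper's automatic construction, is to cluster per-class mean feature vectors agglomeratively so that similar activity classes share a coarse parent for the B-CNN's first branch. The clustering criterion and the random stand-in features below are assumptions for illustration.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
n_classes, feat_dim = 8, 16
# Stand-in for averaged penultimate-layer features of each activity class.
class_means = rng.normal(size=(n_classes, feat_dim))

Z = linkage(class_means, method="ward")          # agglomerative clustering
coarse = fcluster(Z, t=3, criterion="maxclust")  # 3 coarse parent classes
for c in range(n_classes):
    print(f"fine class {c} -> coarse class {coarse[c]}")
```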

33 pages, 2634 KiB  
Review
Dragonfly Algorithm and Its Hybrids: A Survey on Performance, Objectives and Applications
by Bibi Aamirah Shafaa Emambocus, Muhammed Basheer Jasser, Aida Mustapha and Angela Amphawan
Sensors 2021, 21(22), 7542; https://doi.org/10.3390/s21227542 - 13 Nov 2021
Cited by 29 | Viewed by 4486
Abstract
Swarm intelligence is a discipline that makes use of a number of agents to solve optimization problems, producing low-cost, fast, and robust solutions. The dragonfly algorithm (DA), a recently proposed swarm intelligence algorithm, is inspired by the dynamic and static swarming behaviors of dragonflies, and it has been found to offer higher performance than other swarm intelligence and evolutionary algorithms in numerous applications. There are only a few surveys of the dragonfly algorithm, and we have found them limited in certain aspects. Hence, in this paper, we present a more comprehensive survey of DA, its applications in various domains, and its performance compared with other swarm intelligence algorithms. We also analyze the hybrids of DA, the methods they employ to enhance the original DA, their performance compared with the original DA, and their limitations. Moreover, we categorize the hybrids of DA according to the type of problem they have been applied to, their objectives, and the methods they utilize. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)

18 pages, 12262 KiB  
Article
Memory-Based Pruning of Deep Neural Networks for IoT Devices Applied to Flood Detection
by Francisco Erivaldo Fernandes Junior, Luis Gustavo Nonato, Caetano Mazzoni Ranieri and Jó Ueyama
Sensors 2021, 21(22), 7506; https://doi.org/10.3390/s21227506 - 12 Nov 2021
Cited by 9 | Viewed by 2830
Abstract
Automatic flood detection may be an important component for triggering damage control systems and minimizing the risk of social or economic impacts caused by flooding. Riverside images from regular cameras are a widely available resource that can be used for tackling this problem. Nevertheless, state-of-the-art neural networks, the most suitable approach for this type of computer vision task, are usually resource-consuming, which poses a challenge for deploying these models within low-capability Internet of Things (IoT) devices with unstable internet connections. In this work, we propose a deep neural network (DNN) architecture pruning algorithm capable of finding a pruned version of a given DNN within a user-specified memory footprint. Our results demonstrate that the proposed algorithm can find a pruned DNN model with the specified memory footprint with little to no degradation of its segmentation performance. Finally, we show that our algorithm can be used in a memory-constrained wireless sensor network (WSN) employed to detect flooding events of urban rivers, and the resulting pruned models have competitive results compared with the original models. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)
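The core idea of pruning to a user-specified memory footprint can be sketched as a greedy loop: keep the strongest filters (here ranked by L1 norm, an assumed criterion) until the byte budget is exhausted. Real structured pruning must also rewire downstream layers; this sketch only selects which filters survive.

```python
import numpy as np

def prune_to_budget(filters: np.ndarray, budget_bytes: int, bytes_per_param=4):
    norms = np.abs(filters).sum(axis=(1, 2, 3))        # L1 norm per filter
    order = np.argsort(norms)[::-1]                    # keep strongest first
    params_per_filter = int(np.prod(filters.shape[1:]))
    keep, used = [], 0
    for idx in order:
        cost = params_per_filter * bytes_per_param
        if used + cost > budget_bytes:                 # budget exhausted
            break
        keep.append(idx)
        used += cost
    return np.sort(np.array(keep)), used

conv = np.random.randn(64, 32, 3, 3)                   # a layer with 64 filters
kept, used = prune_to_budget(conv, budget_bytes=50_000)
print(len(kept), "filters kept,", used, "bytes used")
```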

16 pages, 4648 KiB  
Article
Infer Thermal Information from Visual Information: A Cross Imaging Modality Edge Learning (CIMEL) Framework
by Shuozhi Wang, Jianqiang Mei, Lichao Yang and Yifan Zhao
Sensors 2021, 21(22), 7471; https://doi.org/10.3390/s21227471 - 10 Nov 2021
Cited by 3 | Viewed by 2194
Abstract
The measurement accuracy and reliability of thermography are largely limited by the relatively low spatial resolution of infrared (IR) cameras in comparison to digital cameras. Using a high-end IR camera to achieve high spatial resolution can be costly or sometimes infeasible due to the high sample rate required. Therefore, there is a strong demand to improve the quality of IR images, particularly at edges, without upgrading the hardware in the context of surveillance and industrial inspection systems. This paper proposes a novel Conditional Generative Adversarial Network (CGAN)-based framework to enhance IR edges by learning high-frequency features from corresponding visual images. A dual discriminator, focusing on edge and content/background, is introduced to guide the cross imaging modality learning procedure of the U-Net generator in high and low frequencies, respectively. Results demonstrate that the proposed framework can effectively enhance barely visible edges in IR images without introducing artefacts, while the content information is well preserved. Unlike most similar studies, this method only requires IR images at test time, which increases its applicability to scenarios where only one imaging modality is available, such as active thermography. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)

13 pages, 7768 KiB  
Article
Model-Independent Lens Distortion Correction Based on Sub-Pixel Phase Encoding
by Pengbo Xiong, Shaokai Wang, Weibo Wang, Qixin Ye and Shujiao Ye
Sensors 2021, 21(22), 7465; https://doi.org/10.3390/s21227465 - 10 Nov 2021
Cited by 7 | Viewed by 3451
Abstract
Lens distortion can introduce deviations in visual measurement and positioning. The distortion can be minimized by optimizing the lens and selecting high-quality optical glass, but it cannot be completely eliminated. Most existing correction methods are based on accurate distortion models and stable image characteristics. However, the distortion is usually a mixture of the radial and tangential distortions of the lens group, which makes it difficult for a mathematical model to accurately fit the non-uniform distortion. This paper proposes a new model-independent method for correcting complex lens distortion. Taking horizontal and vertical stripe patterns as the calibration target, the sub-pixel value distribution visualizes the image distortion, and the correction parameters are obtained directly from the pixel distribution. A quantitative evaluation method suitable for model-independent approaches is also proposed, which calculates the error based only on the characteristic points of the corrected picture itself. Experiments show that this method can accurately correct distortion with only eight pictures, with an error of 0.39 pixels, providing a simple approach to complex lens distortion correction. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)

19 pages, 13317 KiB  
Article
Towards Autonomous Drone Racing without GPU Using an OAK-D Smart Camera
by Leticia Oyuki Rojas-Perez and Jose Martinez-Carranza
Sensors 2021, 21(22), 7436; https://doi.org/10.3390/s21227436 - 9 Nov 2021
Cited by 12 | Viewed by 5696
Abstract
Recent advances have shown for the first time that it is possible to beat a human with an autonomous drone in a drone race. However, this solution relies heavily on external sensors, specifically on the use of a motion capture system. Thus, a truly autonomous solution demands performing computationally intensive tasks such as gate detection, drone localisation, and state estimation on board. To this end, other solutions rely on specialised hardware such as graphics processing units (GPUs), whose onboard versions are not as powerful as those available for desktop and server computers. An alternative is to combine specialised hardware with smart sensors capable of processing specific tasks on chip, alleviating the need for the onboard processor to perform these computations. Motivated by this, we present the initial results of adapting a novel smart camera, known as the OpenCV AI Kit or OAK-D, as part of a solution for autonomous drone racing (ADR) running entirely on board. This smart camera performs neural inference on chip without using a GPU. It can also perform depth estimation with a stereo rig and run neural network models using images from a 4K colour camera as the input. Additionally, seeking to limit the payload to 200 g, we present a new 3D-printed design of the camera's back case, reducing the original weight by 40% and thus enabling the drone to carry it in tandem with a host onboard computer, the Intel Compute Stick, where we run a controller based on gate detection. The latter is performed with a neural model running on the OAK-D at an operating frequency of 40 Hz, enabling the drone to fly at a speed of 2 m/s. We deem these initial results promising toward the development of a truly autonomous solution that will run intensive computational tasks fully on board. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)

21 pages, 14840 KiB  
Article
A Hybrid Deep Learning Model for Recognizing Actions of Distracted Drivers
by Shuang-Jian Jiao, Lin-Yao Liu and Qian Liu
Sensors 2021, 21(21), 7424; https://doi.org/10.3390/s21217424 - 8 Nov 2021
Cited by 8 | Viewed by 3835
Abstract
With the rapid spread of in-vehicle information systems such as smartphones, navigation systems, and radios, the number of traffic accidents caused by driver distraction shows an increasing trend. Timely identification of and warning about distracted driving are deemed crucial, and the establishment of driver assistance systems is of great value. However, almost all research on recognizing a driver's distracted actions using computer vision methods has neglected the importance of temporal information for action recognition. This paper proposes a hybrid deep learning model for recognizing the actions of distracted drivers. Specifically, we used OpenPose to obtain skeleton information of the human body and then constructed the vector angle and modulus ratio of the human body structure as features to describe the driver's actions, thereby realizing the fusion of deep network features and hand-crafted features, which improves the information density of the spatial features. The K-means clustering algorithm was used to preselect the original frames, and inter-frame comparison was used to obtain the final keyframe sequence by comparing the Euclidean distance between the manually constructed vectors representing frames and the vector representing the cluster center. Finally, we constructed a two-layer long short-term memory neural network to obtain more effective spatiotemporal features, with one softmax layer to identify the distracted driver's action. The experimental results based on the collected dataset prove the effectiveness of this framework, and it can provide a theoretical basis for the establishment of vehicle distraction warning systems. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)

18 pages, 15319 KiB  
Article
TSInsight: A Local-Global Attribution Framework for Interpretability in Time Series Data
by Shoaib Ahmed Siddiqui, Dominique Mercier, Andreas Dengel and Sheraz Ahmed
Sensors 2021, 21(21), 7373; https://doi.org/10.3390/s21217373 - 5 Nov 2021
Cited by 7 | Viewed by 2664
Abstract
With the rise in the employment of deep learning methods in safety-critical scenarios, interpretability is more essential than ever before. Although many different directions regarding interpretability have been explored for visual modalities, time series data has been neglected, with only a handful of methods tested due to their poor intelligibility. We approach the problem of interpretability in a novel way by proposing TSInsight, where we attach an auto-encoder to the classifier with a sparsity-inducing norm on its output and fine-tune it based on the gradients from the classifier and a reconstruction penalty. TSInsight learns to preserve features that are important for prediction by the classifier and suppresses those that are irrelevant, i.e., serves as a feature attribution method to boost the interpretability. In contrast to most other attribution frameworks, TSInsight is capable of generating both instance-based and model-based explanations. We evaluated TSInsight along with nine other commonly used attribution methods on eight different time series datasets to validate its efficacy. The evaluation results show that TSInsight naturally achieves output space contraction; therefore, it is an effective tool for the interpretability of deep time series models. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)

22 pages, 632 KiB  
Article
Time Series Segmentation Based on Stationarity Analysis to Improve New Samples Prediction
by Ricardo Petri Silva, Bruno Bogaz Zarpelão, Alberto Cano and Sylvio Barbon Junior
Sensors 2021, 21(21), 7333; https://doi.org/10.3390/s21217333 - 4 Nov 2021
Cited by 17 | Viewed by 4536
Abstract
A wide range of applications based on sequential data, named time series, have become increasingly popular in recent years, mainly those based on the Internet of Things (IoT). Several different machine learning algorithms exploit the patterns extracted from sequential data to support multiple tasks. However, these data can suffer from unreliable readings that lead to low-accuracy models due to the low quality of the available training sets. Detecting the change points between highly representative segments is an important ally in finding and treating biased subsequences. By constructing a framework based on the Augmented Dickey-Fuller (ADF) test for data stationarity, two proposals to automatically segment subsequences in a time series were developed. The former proposal, called Change Detector segmentation, relies on the change detection methods of data stream mining. The latter, called ADF-based segmentation, is built on a new change detector derived from the ADF test alone. Experiments over real-life IoT databases and benchmarks showed the improvement provided by our proposals for prediction tasks with traditional Autoregressive Integrated Moving Average (ARIMA) and deep learning (Long Short-Term Memory and Temporal Convolutional Network) methods. Results obtained by the Long Short-Term Memory predictive model reduced the relative prediction error from 1 to 0.67, compared to time series without segmentation. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)
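A minimal sketch of ADF-driven segmentation: slide a window over the series, run the Augmented Dickey-Fuller test, and start a new segment whenever the stationarity verdict flips. The window size and significance level are illustrative choices, not the paper's tuned values.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

def adf_segments(series, window=50, alpha=0.05):
    boundaries, prev = [0], None
    for start in range(0, len(series) - window + 1, window):
        p = adfuller(series[start:start + window])[1]  # ADF p-value
        stationary = p < alpha
        if prev is not None and stationary != prev:
            boundaries.append(start)                   # regime change detected
        prev = stationary
    return boundaries

rng = np.random.default_rng(0)
stationary_part = rng.normal(0, 1, 200)
trending_part = np.cumsum(rng.normal(0.2, 1, 200))     # non-stationary walk
print(adf_segments(np.concatenate([stationary_part, trending_part])))
```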

12 pages, 633 KiB  
Article
Lossless Compression of Sensor Signals Using an Untrained Multi-Channel Recurrent Neural Predictor
by Qianhao Chen, Wenqi Wu and Wei Luo
Appl. Sci. 2021, 11(21), 10240; https://doi.org/10.3390/app112110240 - 1 Nov 2021
Cited by 1 | Viewed by 2022
Abstract
The use of sensor applications has been steadily increasing, leading to an urgent need for efficient data compression techniques to facilitate the storage, transmission, and processing of digital signals generated by sensors. Unlike other sequential data such as text sequences, sensor signals have more complex statistical characteristics. Specifically, within every signal point, each bit, which corresponds to a specific precision scale, follows its own conditional distribution depending on its history and even on other bits. Therefore, applying existing general-purpose data compressors usually leads to a relatively low compression ratio, since these compressors do not fully exploit such internal features. What is worse, partitioning a bit stream into groups of a preset size will sometimes break the integrity of each signal point. In this paper, we present a lossless data compressor dedicated to compressing sensor signals, built upon a novel recurrent neural architecture named the multi-channel recurrent unit (MCRU). Each channel in the proposed MCRU models a specific precision range of each signal point without breaking data integrity. During compression and decompression, the mirrored network is trained on the observed data; thus, no pre-training is needed. The superiority of our approach over other compressors is demonstrated experimentally on various types of sensor signals. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)

21 pages, 2194 KiB  
Article
Deep Learning-Based Optimal Smart Shoes Sensor Selection for Energy Expenditure and Heart Rate Estimation
by Heesang Eom, Jongryun Roh, Yuli Sun Hariyani, Suwhan Baek, Sukho Lee, Sayup Kim and Cheolsoo Park
Sensors 2021, 21(21), 7058; https://doi.org/10.3390/s21217058 - 25 Oct 2021
Cited by 7 | Viewed by 4308
Abstract
Wearable technologies are known to improve our quality of life. Among the various wearable devices, shoes are non-intrusive, lightweight, and can be used for outdoor activities. In this study, we estimated energy expenditure and heart rate during treadmill running using smart shoes equipped with a triaxial accelerometer, a triaxial gyroscope, and four-point pressure sensors. The proposed model uses a recent deep learning architecture that does not require any separate preprocessing. Moreover, it is possible to select the optimal sensors using a channel-wise attention mechanism that weighs the sensors depending on their contributions to the estimation of energy expenditure (EE) and heart rate (HR). The performance of the proposed model was evaluated using the root mean squared error (RMSE), mean absolute error (MAE), and coefficient of determination (R2). In EE estimation, the RMSE was 1.05 ± 0.15, the MAE 0.83 ± 0.12, and the R2 0.922 ± 0.005; in HR estimation, the RMSE was 7.87 ± 1.12, the MAE 6.21 ± 0.86, and the R2 0.897 ± 0.017. In both estimations, the most effective sensor was the z-axis of the accelerometer and gyroscope. These results demonstrate that the proposed model can improve the performance of both EE and HR estimation by effectively selecting the optimal sensors during the active movements of participants. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)
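The channel-wise attention idea can be sketched as a squeeze-and-excitation block over sensor channels (for example, 3 accelerometer + 3 gyroscope + 4 pressure channels): the learned per-channel weights indicate each sensor's contribution. Dimensions and the exact gating below are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class SensorAttention(nn.Module):
    def __init__(self, n_channels=10, reduction=2):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(n_channels, n_channels // reduction), nn.ReLU(),
            nn.Linear(n_channels // reduction, n_channels), nn.Sigmoid())

    def forward(self, x):                 # x: (batch, channels, time)
        w = self.fc(x.mean(dim=2))        # squeeze time, excite channels
        return x * w.unsqueeze(2), w      # weighted signal + per-sensor weights

x = torch.randn(8, 10, 128)               # a batch of 10-channel sensor windows
_, weights = SensorAttention()(x)
print(weights.mean(dim=0))                # average importance per sensor channel
```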

16 pages, 14233 KiB  
Article
Biosignal-Based Driving Skill Classification Using Machine Learning: A Case Study of Maritime Navigation
by Hui Xue, Bjørn-Morten Batalden, Puneet Sharma, Jarle André Johansen and Dilip K. Prasad
Appl. Sci. 2021, 11(20), 9765; https://doi.org/10.3390/app11209765 - 19 Oct 2021
Cited by 5 | Viewed by 2309
Abstract
This work presents a novel approach to detecting stress differences between experts and novices in Situation Awareness (SA) tasks during maritime navigation using one type of wearable sensor, the Empatica E4 wristband. We propose that, for a given workload state, the values of the biosignal data collected from the wearable sensor differ between experts and novices. We describe the design of an SA task experiment and the collection of biosignal data from subjects sailing on a 240° view simulator. The biosignal data were analysed using a machine learning algorithm, a Convolutional Neural Network. The proposed algorithm showed that the biosignal data associated with the experts can be distinguished from those of the novices, which is in line with the results of the NASA Task Load Index (NASA-TLX) rating scores. This study can contribute to the development of a self-training system for maritime navigation in further studies. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)

20 pages, 6781 KiB  
Article
Super-Resolution Network with Information Distillation and Multi-Scale Attention for Medical CT Image
by Tianliu Zhao, Lei Hu, Yongmei Zhang and Jianying Fang
Sensors 2021, 21(20), 6870; https://doi.org/10.3390/s21206870 - 16 Oct 2021
Cited by 9 | Viewed by 2542
Abstract
The CT image is an important reference for clinical diagnosis. However, due to external influences and equipment limitations in imaging, CT images often suffer from blurring, a lack of detail, and unclear edges, which affect subsequent diagnosis. In order to obtain high-quality medical CT images, we propose an information distillation and multi-scale attention network (IDMAN) for medical CT image super-resolution reconstruction. In a deep residual network, instead of only adding convolution layers repeatedly, we introduce information distillation to make full use of the feature information. In addition, in order to better capture information and focus on more important features, we use a multi-scale attention block with multiple branches, which can automatically generate weights to adjust the network. Through these improvements, our model effectively solves the problems of insufficient feature utilization and a single attention source, improves the learning and expression ability, and can thus reconstruct higher-quality medical CT images. We conducted a series of experiments; the results show that our method outperforms previous algorithms and achieves better medical CT image reconstruction in both objective evaluation and visual effect. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)

18 pages, 958 KiB  
Article
Filter Pruning via Measuring Feature Map Information
by Linsong Shao, Haorui Zuo, Jianlin Zhang, Zhiyong Xu, Jinzhen Yao, Zhixing Wang and Hong Li
Sensors 2021, 21(19), 6601; https://doi.org/10.3390/s21196601 - 2 Oct 2021
Cited by 12 | Viewed by 3694
Abstract
Neural network pruning, an important method for reducing the computational complexity of deep models, can be well applied to devices with limited resources. However, most current methods focus on some kind of information about the filter itself to prune the network, rarely exploring the relationship between the feature maps and the filters. In this paper, two novel pruning methods are proposed. First, a new pruning method is proposed that reflects the importance of filters by exploring the information in the feature maps. Based on the premise that the more information a feature map carries, the more important it is, the information entropy of the feature maps is used to measure information and to evaluate the importance of each filter in the current layer; normalization is then used to enable cross-layer comparison. As a result, the network structure is efficiently pruned while its performance is well preserved. Second, we propose a parallel pruning method that combines our method above with the slimming pruning method, which achieves better results in terms of computational cost. Our methods perform better in terms of accuracy, parameters, and FLOPs compared with most advanced methods. On ImageNet, 72.02% top-1 accuracy is achieved for ResNet50 with merely 11.41M parameters and 1.12B FLOPs. For DenseNet40 on CIFAR10, 94.04% accuracy is obtained with only 0.38M parameters and 110.72M FLOPs, and our parallel pruning method reduces the parameters and FLOPs to just 0.37M and 100.12M, respectively, with little loss of accuracy. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)
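A small sketch of the entropy criterion: bin each feature map's activations, estimate an entropy, and mark the lowest-entropy filters as pruning candidates. The binning scheme and the pruning fraction are illustrative assumptions.

```python
import numpy as np

def feature_map_entropy(fmap: np.ndarray, bins=32) -> float:
    # Histogram the activations and compute a Shannon entropy estimate.
    hist, _ = np.histogram(fmap, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
fmaps = rng.normal(size=(16, 28, 28))            # feature maps of 16 filters
fmaps[3] = 0.01                                   # a nearly constant map
scores = np.array([feature_map_entropy(f) for f in fmaps])
print("prune candidates:", np.argsort(scores)[:4])  # lowest-entropy filters
```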

16 pages, 22156 KiB  
Article
FGFF Descriptor and Modified Hu Moment-Based Hand Gesture Recognition
by Beiwei Zhang, Yudong Zhang, Jinliang Liu and Bin Wang
Sensors 2021, 21(19), 6525; https://doi.org/10.3390/s21196525 - 29 Sep 2021
Cited by 4 | Viewed by 4718
Abstract
Gesture recognition has been studied for decades and still remains an open problem. One important reason is that the features representing those gestures are not sufficient, which may lead to poor performance and weak robustness. Therefore, this work aims at a comprehensive and discriminative feature for hand gesture recognition. Here, a distinctive Fingertip Gradient orientation with Finger Fourier (FGFF) descriptor and modified Hu moments are proposed on the platform of a Kinect sensor. Firstly, two algorithms are designed to extract the fingertip-emphasized features, including the palm center, fingertips, and their gradient orientations, followed by the finger-emphasized Fourier descriptor used to construct the FGFF descriptors. Then, modified Hu moment invariants with much lower exponents are discussed to encode the contour-emphasized structure in the hand region. Finally, a weighted AdaBoost classifier is built based on the finger-earth mover's distance and SVM models to realize hand gesture recognition. Extensive experiments on a ten-gesture dataset were carried out, comparing the proposed algorithm with three benchmark methods to validate its performance. Encouraging results were obtained in terms of recognition accuracy and efficiency. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)

14 pages, 2197 KiB  
Article
Estimation of Various Walking Intensities Based on Wearable Plantar Pressure Sensors Using Artificial Neural Networks
by Hsing-Chung Chen, Sunardi, Ben-Yi Liau, Chih-Yang Lin, Veit Babak Hamun Akbari, Chi-Wen Lung and Yih-Kuen Jan
Sensors 2021, 21(19), 6513; https://doi.org/10.3390/s21196513 - 29 Sep 2021
Cited by 15 | Viewed by 4027
Abstract
Walking has been demonstrated to improve health in people with diabetes and peripheral arterial disease. However, continuous walking can produce repeated stress on the plantar foot and cause a high risk of foot ulcers. In addition, a higher walking intensity (i.e., including different speeds and durations) will increase the risk. Therefore, quantifying the walking intensity is essential for rehabilitation interventions to indicate suitable walking exercise. This study proposed a machine learning model to classify the walking speed and duration using plantar region pressure images. A wearable plantar pressure measurement system was used to measure plantar pressures during walking. An Artificial Neural Network (ANN) was adopted to develop a model for walking intensity classification using different plantar region pressure images, including the first toe (T1), the first metatarsal head (M1), the second metatarsal head (M2), and the heel (HL). The classification consisted of three walking speeds (i.e., slow at 0.8 m/s, moderate at 1.6 m/s, and fast at 2.4 m/s) and two walking durations (i.e., 10 min and 20 min). Of the 12 participants, 10 participants (720 images) were randomly selected to train the classification model, and 2 participants (144 images) were utilized to evaluate the model performance. Experimental evaluation indicated that the ANN model effectively classified different walking speeds and durations based on the plantar region pressure images. Each plantar region pressure image (i.e., T1, M1, M2, and HL) generates different accuracies of the classification model. Higher performance was achieved when classifying walking speeds (0.8 m/s, 1.6 m/s, and 2.4 m/s) and 10 min walking duration in the T1 region, evidenced by an F1-score of 0.94. The dataset T1 could be an essential variable in machine learning to classify the walking intensity at different speeds and durations. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)

16 pages, 17280 KiB  
Article
A Class-Imbalanced Deep Learning Fall Detection Algorithm Using Wearable Sensors
by Jing Zhang, Jia Li and Weibing Wang
Sensors 2021, 21(19), 6511; https://doi.org/10.3390/s21196511 - 29 Sep 2021
Cited by 14 | Viewed by 4430
Abstract
Falling represents one of the most serious health risks for elderly people; it may cause irreversible injuries if the individual cannot obtain timely treatment after the fall happens. Therefore, timely and accurate fall detection algorithm research is extremely important. Recently, a number of researchers have focused on fall detection and made many achievements, and most of the relevant algorithm studies are based on ideal class-balanced datasets. However, in real-life applications, the possibilities of Activities of Daily Life (ADL) and fall events are different, so the data collected by wearable sensors suffers from class imbalance. The previously developed algorithms perform poorly on class-imbalanced data. In order to solve this problem, this paper proposes an algorithm that can effectively distinguish falls from a large amount of ADL signals. Compared with the state-of-the-art fall detection algorithms, the proposed method can achieve the highest score in multiple evaluation methods, with a sensitivity of 99.33%, a specificity of 91.86%, an F-Score of 98.44% and an AUC of 98.35%. The results prove that the proposed algorithm is effective on class-imbalanced data and more suitable for real-life application compared to previous works. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)
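A standard baseline remedy for the class imbalance this paper targets, shown as a hedged sketch: up-weight the rare fall class in the loss so that ADL-dominated batches do not swamp it. This is a generic technique, not the paper's specific algorithm; the class counts below are made up.

```python
import torch
import torch.nn as nn

n_adl, n_fall = 9500, 500                    # imbalanced wearable-sensor data
pos_weight = torch.tensor([n_adl / n_fall])  # up-weight the minority fall class
criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)

logits = torch.randn(32, 1)                  # model outputs for a mini-batch
labels = torch.randint(0, 2, (32, 1)).float()
print(criterion(logits, labels))             # imbalance-aware training loss
```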

14 pages, 4228 KiB  
Article
Estimation of Mechanical Power Output Employing Deep Learning on Inertial Measurement Data in Roller Ski Skating
by Md Zia Uddin, Trine M. Seeberg, Jan Kocbach, Anders E. Liverud, Victor Gonzalez, Øyvind Sandbakk and Frédéric Meyer
Sensors 2021, 21(19), 6500; https://doi.org/10.3390/s21196500 - 29 Sep 2021
Cited by 6 | Viewed by 3268
Abstract
The ability to optimize power generation in sports is imperative, both for understanding and balancing training load correctly and for optimizing competition performance. In this paper, we aim to estimate mechanical power output by employing a time-sequential information-based deep Long Short-Term Memory (LSTM) neural network on data from multiple inertial measurement units (IMUs). Thirteen athletes conducted roller ski skating trials on a treadmill with varying incline and speed. The acceleration and gyroscope data collected with the IMUs were run through statistical feature processing before being used by the deep learning model to estimate power output. The model was thereafter used for prediction of power from test data using two approaches. First, a user-dependent case was explored, reaching a power estimation within 3.5% error. Second, a user-independent case was developed, reaching an error of 11.6% for the power estimation. Finally, the LSTM model was compared to two other machine learning models and was found to be superior. In conclusion, the user-dependent model allows for precise estimation of roller skiing power output after training the model on data from each athlete. The user-independent model provides less accurate estimation; however, the accuracy may be sufficient for providing valuable information to recreational skiers. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)

16 pages, 3180 KiB  
Article
Recovery of Ionospheric Signals Using Fully Convolutional DenseNet and Its Challenges
by Merlin M. Mendoza, Yu-Chi Chang, Alexei V. Dmitriev, Chia-Hsien Lin, Lung-Chih Tsai, Yung-Hui Li, Mon-Chai Hsieh, Hao-Wei Hsu, Guan-Han Huang, Yu-Ciang Lin and Enkhtuya Tsogtbaatar
Sensors 2021, 21(19), 6482; https://doi.org/10.3390/s21196482 - 28 Sep 2021
Cited by 4 | Viewed by 2707
Abstract
The technique of active ionospheric sounding by ionosondes requires sophisticated methods for the recovery of experimental data on ionograms. In this work, we applied an advanced deep learning algorithm for the identification and classification of signals from different ionospheric layers. We collected a dataset of 6131 manually labeled ionograms acquired from low-latitude ionosondes in Taiwan. In the ionograms, we distinguished 11 different classes of signals according to their ionospheric layers. We developed an artificial neural network, FC-DenseNet24, based on the FC-DenseNet convolutional neural network. We also developed a double-filtering algorithm to reduce the number of incorrectly classified signals. This made it possible to successfully recover the sporadic E layer and the F2 layer from highly noise-contaminated ionograms whose mean signal-to-noise ratio was low (SNR = 1.43). The Intersection over Union (IoU) of the recovery of these two signal classes was greater than 0.6, higher than that of previously reported models. We also identified three factors that can lower the recovery accuracy: (1) smaller sample statistics; (2) mixing and overlapping of different signals; and (3) the compact shape of signals. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)

15 pages, 2686 KiB  
Article
Data-Driven Investigation of Gait Patterns in Individuals Affected by Normal Pressure Hydrocephalus
by Kiran Kuruvithadam, Marcel Menner, William R. Taylor, Melanie N. Zeilinger, Lennart Stieglitz and Marianne Schmid Daners
Sensors 2021, 21(19), 6451; https://doi.org/10.3390/s21196451 - 27 Sep 2021
Cited by 7 | Viewed by 2996
Abstract
Normal pressure hydrocephalus (NPH) is a chronic and progressive disease that affects predominantly elderly subjects. The most prevalent symptoms are gait disorders, generally determined by visual observation or measurements taken in complex laboratory environments. However, controlled testing environments can have a significant influence on the way subjects walk and hinder the identification of natural walking characteristics. The study aimed to investigate the differences in walking patterns between a controlled environment (10 m walking test) and a real-world environment (72 h recording) based on measurements taken via a wearable gait assessment device. We tested whether real-world environment measurements can be beneficial for the identification of gait disorders by comparing patients' gait parameters with those of an age-matched control group in both environments. Subsequently, we implemented four machine learning classifiers to inspect the individual strides' profiles. Our results on twenty young subjects, twenty elderly subjects and twelve NPH patients indicate that patients exhibited a considerable difference between the two environments, in particular in gait speed (p = 0.0073), stride length (p = 0.0073), foot clearance (p = 0.0117) and swing/stance ratio (p = 0.0098). Importantly, measurements taken in real-world environments yield a better discrimination of NPH patients compared to the controlled setting. Finally, the use of stride classifiers shows promise in the identification of strides affected by motion disorders. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)
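As an illustration of the paired comparison behind the quoted p-values, here is a hedged sketch using a Wilcoxon signed-rank test; the paper does not state which statistical test was used, and the data below are synthetic stand-ins, not the patients' measurements.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
# Synthetic stand-ins for twelve patients' mean gait speed (m/s):
lab  = rng.normal(0.9, 0.15, 12)            # 10 m walking test
real = lab - rng.normal(0.10, 0.05, 12)     # 72 h real-world recording

stat, p = wilcoxon(lab, real)               # paired, nonparametric
print(f"Wilcoxon W = {stat:.1f}, p = {p:.4f}")
```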
14 pages, 14350 KiB  
Article
An Embeddable Algorithm for Automatic Garbage Detection Based on Complex Marine Environment
by Hongjie Deng, Daji Ergu, Fangyao Liu, Bo Ma and Ying Cai
Sensors 2021, 21(19), 6391; https://doi.org/10.3390/s21196391 - 24 Sep 2021
Cited by 27 | Viewed by 3748
Abstract
With the continuous development of artificial intelligence, embedding object detection algorithms into autonomous underwater detectors for marine garbage cleanup has become an emerging application area. Considering the complexity of the marine environment and the low resolution of the images taken by underwater detectors, this paper proposes an improved algorithm based on Mask R-CNN, with the aim of achieving high-accuracy marine garbage detection and instance segmentation. First, the idea of dilated convolution is introduced into the Feature Pyramid Network to enhance the feature extraction ability for small objects. Second, a spatial-channel attention mechanism is used to reweight features adaptively, effectively focusing attention on the objects to be detected. Third, a re-scoring branch is added to improve the accuracy of instance segmentation by scoring the predicted masks based on the Generalized Intersection over Union. Finally, we train the proposed algorithm on the Transcan dataset, evaluate its effectiveness with various metrics, and compare it with existing algorithms. The experimental results show that, compared to the baseline provided by the Transcan dataset, the proposed algorithm improves the mAP indexes on the two tasks of garbage detection and instance segmentation by 9.6 and 5.0, respectively, a significant improvement in performance. Thus, it can be better applied in the marine environment and achieve high-precision object detection and instance segmentation. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)
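To make the dilated-convolution idea concrete, below is a minimal PyTorch sketch of an FPN-style lateral branch widened with a dilation-2 convolution. The module name, channel sizes, and residual wiring are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class DilatedLateral(nn.Module):
    """FPN lateral branch with a dilated 3x3 convolution that
    widens the receptive field, which helps small objects."""
    def __init__(self, in_ch, out_ch=256):
        super().__init__()
        self.lateral = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.dilated = nn.Conv2d(out_ch, out_ch, kernel_size=3,
                                 padding=2, dilation=2)

    def forward(self, x):
        x = self.lateral(x)
        return x + self.dilated(x)   # residual keeps the original detail

feat = torch.randn(1, 512, 64, 64)       # a backbone feature map
print(DilatedLateral(512)(feat).shape)   # torch.Size([1, 256, 64, 64])
```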
12 pages, 2178 KiB  
Article
The Seismo-Performer: A Novel Machine Learning Approach for General and Efficient Seismic Phase Recognition from Local Earthquakes in Real Time
by Andrey Stepnov, Vladimir Chernykh and Alexey Konovalov
Sensors 2021, 21(18), 6290; https://doi.org/10.3390/s21186290 - 19 Sep 2021
Cited by 6 | Viewed by 4024
Abstract
When recording seismic ground motion at multiple sites using independent recording stations, one needs to recognize the presence of the same parts of seismic waves arriving at these stations. This problem is known in seismology as seismic phase picking. It is challenging to automate the accurate picking of seismic phases to the level of human capability; solving this problem would make it possible to automate routine processing in real time on any local network. A new machine learning approach was developed to classify seismic phases from local earthquakes. The resulting model is based on spectrograms and utilizes the transformer architecture with a self-attention mechanism and without any convolution blocks. The model generalizes across local networks and has only 57k learnable parameters. To assess this generalization property, two new datasets were developed, containing local earthquake data collected from two different regions using a wide variety of seismic instruments. These data were withheld from the training of every model so that generalization could be estimated fairly. With its pre-trained weights, the new model achieves the best classification and computational performance compared with baseline models from related work. The model code is available online and is ready for day-to-day real-time processing on conventional seismic equipment without graphics processing units. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)
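A hedged sketch of the convolution-free design described above: spectrogram time frames become tokens for a small transformer encoder with self-attention, and a learned classification token produces the phase label. All layer sizes are toy values, not the paper's 57k-parameter configuration.

```python
import torch
import torch.nn as nn

class TinyPhaseClassifier(nn.Module):
    """Spectrogram frames -> transformer encoder -> P / S / noise."""
    def __init__(self, n_bins=64, d=32, n_classes=3):
        super().__init__()
        self.embed = nn.Linear(n_bins, d)      # one token per time frame
        self.cls = nn.Parameter(torch.zeros(1, 1, d))
        layer = nn.TransformerEncoderLayer(d, nhead=4, dim_feedforward=64,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d, n_classes)

    def forward(self, spec):                   # spec: (batch, frames, bins)
        tok = self.embed(spec)
        tok = torch.cat([self.cls.expand(len(tok), -1, -1), tok], dim=1)
        return self.head(self.encoder(tok)[:, 0])  # classify the CLS token

x = torch.randn(8, 40, 64)                     # 8 spectrograms
print(TinyPhaseClassifier()(x).shape)          # torch.Size([8, 3])
```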
15 pages, 4082 KiB  
Article
Human Activity Classification Using Multilayer Perceptron
by Ojan Majidzadeh Gorjani, Radek Byrtus, Jakub Dohnal, Petr Bilik, Jiri Koziorek and Radek Martinek
Sensors 2021, 21(18), 6207; https://doi.org/10.3390/s21186207 - 16 Sep 2021
Cited by 12 | Viewed by 3131
Abstract
The number of smart homes is rapidly increasing. Smart homes typically offer voice-activated functions, automation, monitoring, and event tracking. Besides comfort and convenience, the integration of smart home functionality with data processing methods can provide valuable information about the well-being of the smart home residents. This study aims to take data analysis within smart homes beyond occupancy monitoring and fall detection. This work uses a multilayer perceptron neural network to recognize multiple human activities from wrist- and ankle-worn devices. The developed models show very high recognition accuracy across all activity classes. The cross-validation results indicate accuracy levels above 98% for all models, and the scoring evaluation methods resulted in an average accuracy reduction of only 10%. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)
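A minimal sketch of the classification setup described above, using scikit-learn's MLPClassifier with cross-validation; the feature dimensions, class count, and data are synthetic placeholders, so the printed accuracy is chance-level rather than the paper's 98%.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 12))   # synthetic wrist/ankle sensor features
y = rng.integers(0, 5, 600)      # synthetic labels for five activities

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(64, 32),
                                  max_iter=500, random_state=1))
print(cross_val_score(clf, X, y, cv=5).mean())
```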
17 pages, 13235 KiB  
Article
Photo-Realistic Image Dehazing and Verifying Networks via Complementary Adversarial Learning
by Joongchol Shin and Joonki Paik
Sensors 2021, 21(18), 6182; https://doi.org/10.3390/s21186182 - 15 Sep 2021
Cited by 4 | Viewed by 2812
Abstract
Physical model-based dehazing methods cannot, in general, avoid environmental variables and undesired artifacts such as non-collected illuminance, halo, and saturation, because it is difficult to accurately estimate the illuminance, light transmission, and airlight. Furthermore, the haze model estimation process requires very high computational complexity. To solve this problem by directly estimating the radiance of hazy images, we present a novel dehazing and verifying network (DVNet). In the dehazing procedure, we enhance the clean images using a correction network (CNet), which uses the ground truth to guide the learning of the haze network. Hazy images are then restored through a haze network (HNet). Furthermore, a verifying method checks the errors of both CNet and HNet using self-supervised learning. Finally, the proposed complementary adversarial learning method produces more natural results. Note that the proposed discriminator and generators (HNet & CNet) can be trained on an unpaired dataset. Overall, the proposed DVNet generates better dehazed results than state-of-the-art approaches under various hazy conditions. Experimental results show that the DVNet outperforms state-of-the-art dehazing methods in most cases. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)
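The following is a hedged sketch of a single adversarial update of the kind the abstract describes; toy convolutional stacks stand in for HNet and the discriminator, and CNet and the self-supervised verification step are omitted entirely.

```python
import torch
import torch.nn as nn

# Toy stand-ins: a "haze network" generator and a small discriminator.
hnet = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                     nn.Conv2d(16, 3, 3, padding=1))
disc = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
                     nn.Flatten(), nn.LazyLinear(1))
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(hnet.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)

hazy, clean = torch.rand(4, 3, 64, 64), torch.rand(4, 3, 64, 64)  # unpaired
fake = hnet(hazy)                        # restored (dehazed) images

# Discriminator step: real clean images vs. detached fakes.
d_loss = (bce(disc(clean), torch.ones(4, 1)) +
          bce(disc(fake.detach()), torch.zeros(4, 1)))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: try to make the restored images look clean.
g_loss = bce(disc(fake), torch.ones(4, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```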
15 pages, 3189 KiB  
Article
Automatic Inside Point Localization with Deep Reinforcement Learning for Interactive Object Segmentation
by Guoqing Li, Guoping Zhang and Chanchan Qin
Sensors 2021, 21(18), 6100; https://doi.org/10.3390/s21186100 - 11 Sep 2021
Cited by 1 | Viewed by 2317
Abstract
In the task of interactive image segmentation, the Inside-Outside Guidance (IOG) algorithm has demonstrated superior segmentation performance by leveraging inside-outside guidance information. Nevertheless, we observe that inconsistent input between training and testing when selecting the inside point results in significant performance degradation. In this paper, a deep reinforcement learning framework, named Inside Point Localization Network (IPL-Net), is proposed to infer a suitable position for the inside point to assist the IOG algorithm. Concretely, when a user first clicks two outside points at symmetrical corner locations of the target object, our proposed system automatically generates a sequence of movements to localize the inside point. We then perform the IOG interactive segmentation method to precisely segment the target object of interest. The inside point localization problem is difficult to cast as a supervised learning task because collecting images and their corresponding inside points is expensive. Therefore, we formulate the problem as a Markov Decision Process (MDP) and optimize it with a Dueling Double Deep Q-Network (D3QN). We train our network on the PASCAL dataset and demonstrate that it achieves excellent performance. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)
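To make the D3QN formulation concrete, here is a minimal PyTorch sketch of the two ingredients it names: a dueling Q-head and the double-DQN target. The state dimension and action count are illustrative placeholders for the point-moving MDP, not the paper's design.

```python
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    """Dueling head: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""
    def __init__(self, obs_dim=16, n_actions=4):  # e.g. up/down/left/right
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU())
        self.value = nn.Linear(64, 1)
        self.adv = nn.Linear(64, n_actions)

    def forward(self, s):
        h = self.body(s)
        a = self.adv(h)
        return self.value(h) + a - a.mean(dim=1, keepdim=True)

# Double-DQN target: the online net picks the action, the target net scores it.
online, target = DuelingQNet(), DuelingQNet()
s2 = torch.randn(32, 16)                     # batch of next states
r, done = torch.rand(32), torch.zeros(32)
best = online(s2).argmax(dim=1, keepdim=True)
q_next = target(s2).gather(1, best).squeeze(1)
y = r + 0.99 * (1 - done) * q_next.detach()  # regression target for Q(s, a)
```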
16 pages, 4124 KiB  
Article
A Sensor Fused Rear Cross Traffic Detection System Using Transfer Learning
by Jungme Park and Wenchang Yu
Sensors 2021, 21(18), 6055; https://doi.org/10.3390/s21186055 - 9 Sep 2021
Cited by 4 | Viewed by 3061
Abstract
Recent emerging automotive sensors and innovative technologies in Advanced Driver Assistance Systems (ADAS) increase the safety of driving on the road. ADAS enhance road safety by providing early warning signals for drivers and by controlling the vehicle to mitigate a collision. A Rear Cross Traffic (RCT) detection system is an important ADAS application. Rear-end crashes are a frequently occurring type of collision; approximately 29.7% of all crashes are rear-end collisions. The RCT detection system detects obstacles at the rear while the car is backing up. In this paper, a robust sensor-fused RCT detection system is proposed. By combining the information from two radars and a wide-angle camera, the locations of the target objects are identified using the proposed sensor fusion algorithm. Then, a transfer-learned Convolutional Neural Network (CNN) model is used to classify the object type. The experiments show that the proposed sensor-fused RCT detection system reduced the processing time by a factor of 15.34 compared with the camera-only system, while achieving 96.42% accuracy. The experimental results demonstrate that the proposed sensor-fused system offers robust object detection accuracy and fast processing time, which are vital for deploying ADAS. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)
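One common way to fuse a radar detection with a camera (whether it matches this paper's method is not stated in the abstract) is to project the radar target into the image and crop a region of interest for the CNN; below is a hedged sketch with made-up intrinsic values, assuming the radar point is already expressed in the camera frame.

```python
import numpy as np

# Illustrative pinhole intrinsics (fx, fy, cx, cy), not the paper's calibration.
K = np.array([[800.,   0., 640.],
              [  0., 800., 360.],
              [  0.,   0.,   1.]])

def radar_to_pixel(pt_cam):
    """pt_cam: (X right, Y down, Z forward) in metres, camera frame."""
    u, v, w = K @ pt_cam
    return u / w, v / w

u, v = radar_to_pixel(np.array([-2.0, 0.5, 8.0]))  # target 8 m behind the car
half = 64
roi = (int(u) - half, int(v) - half, 2 * half, 2 * half)
print(roi)   # pixel window handed to the CNN classifier
```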
26 pages, 11569 KiB  
Article
Tactile Object Recognition for Humanoid Robots Using New Designed Piezoresistive Tactile Sensor and DCNN
by Somchai Pohtongkam and Jakkree Srinonchat
Sensors 2021, 21(18), 6024; https://doi.org/10.3390/s21186024 - 8 Sep 2021
Cited by 33 | Viewed by 6428
Abstract
A tactile sensor array is a crucial component for applying physical sensing to a humanoid robot. This work focused on developing a palm-size tactile sensor array (56.0 mm × 56.0 mm) to enable object recognition for a humanoid robot hand. The sensor was based on PCB technology operating on the piezoresistive principle. A conductive polymer composite sheet was used as the sensing element, and the sensor's matrix array was 16 × 16 pixels. The sensitivity of the sensor was evaluated and the sensor was installed on the robot hand. Tactile images obtained from 20 object classes, resolution-enhanced using bicubic interpolation, were used to train and test 19 different DCNNs. InceptionResNetV2 provided superior performance with 91.82% accuracy. However, a multimodal learning method combining InceptionResNetV2 and XceptionNet achieved the highest recognition rate of 92.73%. Moreover, the recognition rate improved further when object exploration was applied. Full article
(This article belongs to the Topic Artificial Intelligence in Sensors)
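As a small illustration of the resolution-enhancement step, here is a hedged sketch that upscales one synthetic 16 × 16 tactile frame with bicubic interpolation before it would be fed to a DCNN; the target size of 224 × 224 is an assumption based on common DCNN input sizes, not taken from the paper.

```python
import numpy as np
from PIL import Image

rng = np.random.default_rng(2)
frame = rng.random((16, 16)).astype(np.float32)   # synthetic tactile frame

img = Image.fromarray(frame, mode="F")            # 32-bit float image
big = np.asarray(img.resize((224, 224), Image.BICUBIC))
print(frame.shape, "->", big.shape)               # (16, 16) -> (224, 224)
```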