

Smart Sensors and Devices in Artificial Intelligence

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensor Networks".

Deadline for manuscript submissions: closed (30 June 2020) | Viewed by 104827

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editors


Guest Editor
Department of Mechanical Engineering, Lassonde School of Engineering, York University, Toronto, ON M3J 1P3, Canada
Interests: robotics and mechatronics; high-performance parallel robotic machine development; sustainable/green manufacturing systems; micro/nanomanipulation and MEMS devices (sensors); micro mobile robots and control of multi-robot cooperation; intelligent servo control system for the MEMS-based high-performance micro-robot; web-based remote manipulation; rehabilitation robot and rescue robot

Guest Editor
Institute on Mechatronics, Xidian University, No. 2 Taibai Rd, Xi'an 710071, China
Interests: parallel robots; mechatronics; intelligent control; design optimization

Special Issue Information

Dear Colleagues,

Sensors are the eyes and/or ears of an intelligent system, such as a UAV, an AGV, or a robot. With developments in materials, signal processing, and multidisciplinary interactions, more and more smart sensors are being proposed and fabricated to meet the growing demands of home, industrial, and military applications. Networks of sensors can enhance the ability to obtain huge amounts of information (big data) and improve precision, which also mirrors the developmental tendency of modern sensors. Moreover, artificial intelligence is a novel impetus for sensors and sensor networks, enabling sensors to learn, think, and feed back more useful results.

This Special Issue welcomes new research results from academia and industry on the subject of “Smart Sensors and Networks”, especially sensing technologies utilizing Artificial Intelligence. The Special Issue topics include, but are not limited to:

  • smart sensors
  • biosensors
  • sensor networks
  • sensor data fusion
  • artificial intelligence
  • deep learning
  • mechatronics devices for sensors
  • applications of sensors for robotics and mechatronics devices

The Special Issue also welcomes excellent extended papers invited from the 2018 2nd International Conference on Artificial Intelligence Applications and Technologies (AIAAT 2018) and the 2019 3rd International Conference on Artificial Intelligence Applications and Technologies (AIAAT 2019).

Prof. Dr. Dan Zhang
Prof. Dr. Xuechao Duan
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • smart sensors
  • biosensors
  • sensor network
  • sensor data fusion
  • artificial intelligence
  • deep learning
  • robotics
  • mechatronics devices

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (19 papers)


Editorial


4 pages, 146 KiB  
Editorial
Smart Sensors and Devices in Artificial Intelligence
by Dan Zhang and Bin Wei
Sensors 2020, 20(20), 5945; https://doi.org/10.3390/s20205945 - 21 Oct 2020
Cited by 9 | Viewed by 3385
Abstract
As stated in the Special Issue call, “sensors are eyes or/and ears of an intelligent system, such as Unmanned Aerial Vehicle (UAV), Automated Guided Vehicle (AGV) and robots [...] Full article
(This article belongs to the Special Issue Smart Sensors and Devices in Artificial Intelligence)

Research


24 pages, 5312 KiB  
Article
Time-Aware and Temperature-Aware Fire Evacuation Path Algorithm in IoT-Enabled Multi-Story Multi-Exit Buildings
by Hong-Hsu Yen, Cheng-Han Lin and Hung-Wei Tsao
Sensors 2021, 21(1), 111; https://doi.org/10.3390/s21010111 - 26 Dec 2020
Cited by 9 | Viewed by 3345
Abstract
Temperature sensors with a communication capability can help monitor and report temperature values to a control station, which enables dynamic and real-time evacuation paths in fire emergencies. As compared to traditional approaches that identify a one-shot fire evacuation path, in this paper, we [...] Read more.
Temperature sensors with a communication capability can help monitor and report temperature values to a control station, which enables dynamic and real-time evacuation paths in fire emergencies. As compared to traditional approaches that identify a one-shot fire evacuation path, in this paper, we develop an intelligent algorithm that can identify time-aware and temperature-aware fire evacuation paths by considering temperature changes at different time slots in multi-story and multi-exit buildings. We first propose a method that can map three-dimensional multi-story multi-exit buildings into a two-dimensional graph. Then, a mathematical optimization model is proposed to capture this time-aware and temperature-aware evacuation path problem in multi-story multi-exit buildings. Six fire evacuation algorithms (BFS, SP, DBFS, TABFS, TASP and TADBFS) are proposed to identify the efficient evacuation path. The first three algorithms, which do not address human temperature limit constraints, can be used by rescue robots or firefighters with fire-proof suits. The last three algorithms, which do address human temperature limit constraints, can be used by evacuees in terms of total time slots and total temperature on the evacuation path. In the computational experiments, an open-space building and the Taipei 101 Shopping Mall were both tested to verify the solution quality of these six algorithms. From the computational results, TABFS, TASP and TADBFS identify almost the same evacuation path in the open-space building and the Taipei 101 Shopping Mall. BFS, SP and DBFS can locate marginally better results in terms of evacuation time and total temperature on the evacuation path. When considering evacuating a group of evacuees, the computational time of the evacuation algorithm is very important in a time-limited evacuation process. Considering the extreme case of seven fires at eight emergency exits in the Taipei 101 Shopping Mall, the golden window for evacuation is 15 time slots. Only TABFS and TADBFS are applicable to evacuate 1200 people in the Taipei 101 Shopping Mall when one time slot is set as one minute. The computational results show that the capacity limit for the Taipei 101 Shopping Mall is 800 people in the extreme case of seven fires. In this case, when the number of people in the building is less than 700, TADBFS should be adopted. When the number of people in the building is greater than 700, TABFS can evacuate more people than TADBFS. Besides identifying an efficient evacuation path, another significant contribution of this paper is identifying the best sensor density deployment in large buildings like the Taipei 101 Shopping Mall when considering fire evacuation. Full article
(This article belongs to the Special Issue Smart Sensors and Devices in Artificial Intelligence)
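The time-aware, temperature-aware search idea behind TABFS can be sketched as a BFS over (node, time-slot) states on the building graph, skipping nodes whose reported temperature at the arrival slot exceeds a human limit. This is a toy illustration under assumed data shapes and an assumed 60 °C limit, not the authors' code:

```python
from collections import deque

def ta_bfs(graph, temps, start, exits, limit=60.0):
    """Breadth-first search over (node, time-slot) states.

    graph: dict node -> list of neighbor nodes
    temps: dict node -> list of sensed temperatures, one per time slot
    Nodes hotter than `limit` at the arrival slot are impassable.
    Returns the shortest evacuation path to a reachable exit, or None.
    """
    queue = deque([(start, 0, [start])])
    seen = {(start, 0)}
    while queue:
        node, t, path = queue.popleft()
        if node in exits:
            return path
        for nxt in graph[node]:
            slot = min(t + 1, len(temps[nxt]) - 1)  # hold last reading
            state = (nxt, t + 1)
            if state not in seen and temps[nxt][slot] <= limit:
                seen.add(state)
                queue.append((nxt, t + 1, path + [nxt]))
    return None
```

With a corridor blocked by a 90 °C reading, the search routes around it; tightening the limit below every corridor temperature yields no path.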

25 pages, 5731 KiB  
Article
TADILOF: Time Aware Density-Based Incremental Local Outlier Detection in Data Streams
by Jen-Wei Huang, Meng-Xun Zhong and Bijay Prasad Jaysawal
Sensors 2020, 20(20), 5829; https://doi.org/10.3390/s20205829 - 15 Oct 2020
Cited by 16 | Viewed by 3628
Abstract
Outlier detection in data streams is crucial to successful data mining. However, this task is made increasingly difficult by the enormous growth in the quantity of data generated by the expansion of Internet of Things (IoT). Recent advances in outlier detection based on [...] Read more.
Outlier detection in data streams is crucial to successful data mining. However, this task is made increasingly difficult by the enormous growth in the quantity of data generated by the expansion of Internet of Things (IoT). Recent advances in outlier detection based on the density-based local outlier factor (LOF) algorithms do not consider variations in data that change over time. For example, there may appear a new cluster of data points over time in the data stream. Therefore, we present a novel algorithm for streaming data, referred to as time-aware density-based incremental local outlier detection (TADILOF) to overcome this issue. In addition, we have developed a means for estimating the LOF score, termed "approximate LOF," based on historical information following the removal of outdated data. The results of experiments demonstrate that TADILOF outperforms current state-of-the-art methods in terms of AUC while achieving similar performance in terms of execution time. Moreover, we present an application of the proposed scheme to the development of an air-quality monitoring system. Full article
(This article belongs to the Special Issue Smart Sensors and Devices in Artificial Intelligence)
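For intuition about the LOF scores that TADILOF builds on, the classic (non-incremental) local outlier factor can be computed directly from k-distances, reachability distances, and local reachability densities. A self-contained toy sketch, not the paper's time-aware or approximate variant:

```python
import math

def lof_scores(points, k=2):
    """Plain local outlier factor for small 2-D data, for intuition only.

    points: list of (x, y) tuples. Returns one LOF score per point;
    scores well above 1 indicate local outliers.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    n = len(points)
    neigh, kdist = [], []  # k nearest neighbours and k-distance per point
    for i in range(n):
        d = sorted((dist(points[i], points[j]), j) for j in range(n) if j != i)
        neigh.append([j for _, j in d[:k]])
        kdist.append(d[k - 1][0])

    def lrd(i):  # local reachability density
        reach = [max(kdist[j], dist(points[i], points[j])) for j in neigh[i]]
        return len(reach) / sum(reach)

    dens = [lrd(i) for i in range(n)]
    return [sum(dens[j] for j in neigh[i]) / (k * dens[i]) for i in range(n)]
```

A tight cluster scores near 1 while a far-away point scores far above it; the paper's contribution is keeping such scores current as the stream drifts and old points expire.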

17 pages, 2543 KiB  
Article
Vehicle Classification Based on FBG Sensor Arrays Using Neural Networks
by Michal Frniak, Miroslav Markovic, Patrik Kamencay, Jozef Dubovan, Miroslav Benco and Milan Dado
Sensors 2020, 20(16), 4472; https://doi.org/10.3390/s20164472 - 10 Aug 2020
Cited by 11 | Viewed by 3709
Abstract
This article is focused on the automatic classification of passing vehicles through an experimental platform using optical sensor arrays. The amount of data generated from various sensor systems is growing proportionally every year. Therefore, it is necessary to look for more progressive solutions [...] Read more.
This article is focused on the automatic classification of passing vehicles through an experimental platform using optical sensor arrays. The amount of data generated from various sensor systems is growing proportionally every year. Therefore, it is necessary to look for more progressive solutions to these problems. Methods of implementing artificial intelligence are becoming a new trend in this area. At first, an experimental platform with two separate groups of fiber Bragg grating sensor arrays (horizontally and vertically oriented) installed into the top pavement layers was created. Interrogators were connected to sensor arrays to measure pavement deformation caused by vehicles passing over the pavement. Next, neural networks for visual classification with a closed-circuit television camera to separate vehicles into different classes were used. This classification was used for the verification of measured and analyzed data from sensor arrays. The newly proposed neural network for vehicle classification from the sensor array dataset was created. From the obtained experimental results, it is evident that our proposed neural network was capable of separating trucks from other vehicles, with an accuracy of 94.9%, and classifying vehicles into three different classes, with an accuracy of 70.8%. Based on the experimental results, extending sensor arrays as described in the last part of the paper is recommended. Full article
(This article belongs to the Special Issue Smart Sensors and Devices in Artificial Intelligence)

21 pages, 6518 KiB  
Article
Control System for Vertical Take-Off and Landing Vehicle’s Adaptive Landing Based on Multi-Sensor Data Fusion
by Hongyan Tang, Dan Zhang and Zhongxue Gan
Sensors 2020, 20(16), 4411; https://doi.org/10.3390/s20164411 - 7 Aug 2020
Cited by 21 | Viewed by 4275
Abstract
Vertical take-off and landing unmanned aerial vehicles (VTOL UAV) are widely used in various fields because of their stable flight, easy operation, and low requirements for take-off and landing environments. To further expand the UAV’s take-off and landing environment to include a non-structural [...] Read more.
Vertical take-off and landing unmanned aerial vehicles (VTOL UAV) are widely used in various fields because of their stable flight, easy operation, and low requirements for take-off and landing environments. To further expand the UAV’s take-off and landing environment to include a non-structural complex environment, this study developed a landing gear robot for VTOL vehicles. This article mainly introduces the adaptive landing control of the landing gear robot in an unstructured environment. Based on the depth camera (TOF camera), IMU, and optical flow sensor, the control system achieves multi-sensor data fusion and uses a robotic kinematical model to achieve adaptive landing. Finally, this study verifies the feasibility and effectiveness of adaptive landing through experiments. Full article
(This article belongs to the Special Issue Smart Sensors and Devices in Artificial Intelligence)

19 pages, 3728 KiB  
Article
Unidimensional ACGAN Applied to Link Establishment Behaviors Recognition of a Short-Wave Radio Station
by Zilong Wu, Hong Chen and Yingke Lei
Sensors 2020, 20(15), 4270; https://doi.org/10.3390/s20154270 - 31 Jul 2020
Cited by 10 | Viewed by 3048
Abstract
It is difficult to obtain many labeled Link Establishment (LE) behavior signals sent by non-cooperative short-wave radio stations. We propose a novel unidimensional Auxiliary Classifier Generative Adversarial Network (ACGAN) to get more signals and then use unidimensional DenseNet to recognize LE behaviors. Firstly, [...] Read more.
It is difficult to obtain many labeled Link Establishment (LE) behavior signals sent by non-cooperative short-wave radio stations. We propose a novel unidimensional Auxiliary Classifier Generative Adversarial Network (ACGAN) to obtain more signals and then use a unidimensional DenseNet to recognize LE behaviors. Firstly, a few real samples were randomly selected from many real signals as the training set of the unidimensional ACGAN. Then, a new training set was formed by combining real samples with fake samples generated by the trained ACGAN. In addition, a unidimensional convolutional auto-encoder was proposed to describe the reliability of these generated samples. Finally, different LE behaviors could be recognized without the communication protocol standard by using the new training set to train the unidimensional DenseNet. Experimental results revealed that the unidimensional ACGAN effectively augmented the training set, thus improving the performance of the recognition algorithm. When the number of original training samples was 400, 700, 1000, or 1300, the recognition accuracy of unidimensional ACGAN+DenseNet was 1.92%, 6.16%, 4.63%, and 3.06% higher, respectively, than that of unidimensional DenseNet alone. Full article
(This article belongs to the Special Issue Smart Sensors and Devices in Artificial Intelligence)

17 pages, 4771 KiB  
Article
Xbee-Based WSN Architecture for Monitoring of Banana Ripening Process Using Knowledge-Level Artificial Intelligent Technique
by Saud Altaf, Shafiq Ahmad, Mazen Zaindin and Muhammad Waseem Soomro
Sensors 2020, 20(14), 4033; https://doi.org/10.3390/s20144033 - 20 Jul 2020
Cited by 17 | Viewed by 8186
Abstract
Real-time monitoring of fruit ripeness in storage and during logistics allows traders to minimize the chances of financial losses and maximize the quality of the fruit during storage through accurate prediction of the present condition of fruits. In Pakistan, banana production faces different [...] Read more.
Real-time monitoring of fruit ripeness in storage and during logistics allows traders to minimize the chances of financial losses and maximize fruit quality through accurate prediction of the present condition of the fruit. In Pakistan, banana production faces difficulties in production, post-harvest management, and trade marketing due to the atmosphere and mismanagement in storage containers. In recent research, Wireless Sensor Networks (WSNs) have progressively come under investigation in the field of fruit ripening due to their remote monitoring capability. Focused on fruit ripening monitoring, this paper demonstrates an Xbee-based wireless sensor node network. The network architecture of the Xbee sensor nodes and sink end-node is discussed in detail regarding their ability to monitor all the required diagnosis parameters and stages of banana ripening. Furthermore, different features are extracted using the gas sensor, based on diverse values. These features are used to train an Artificial Neural Network (ANN) through the Back Propagation (BP) algorithm for further data validation. The experimental results demonstrate that the proposed WSN architecture can identify the banana condition in the storage area. The proposed Neural Network (NN) architecture works well with the selected feature data sets. The experimental and simulation outcomes show that acceptable accuracy in monitoring the banana ripening condition is attained for the given feature vectors, enabling better decisions for effective monitoring of the current fruit condition. Full article
(This article belongs to the Special Issue Smart Sensors and Devices in Artificial Intelligence)

21 pages, 2319 KiB  
Article
Improving Road Traffic Forecasting Using Air Pollution and Atmospheric Data: Experiments Based on LSTM Recurrent Neural Networks
by Faraz Malik Awan, Roberto Minerva and Noel Crespi
Sensors 2020, 20(13), 3749; https://doi.org/10.3390/s20133749 - 4 Jul 2020
Cited by 33 | Viewed by 5612
Abstract
Traffic flow forecasting is one of the most important use cases related to smart cities. In addition to assisting traffic management authorities, traffic forecasting can help drivers to choose the best path to their destinations. Accurate traffic forecasting is a basic requirement for [...] Read more.
Traffic flow forecasting is one of the most important use cases related to smart cities. In addition to assisting traffic management authorities, traffic forecasting can help drivers to choose the best paths to their destinations. Accurate traffic forecasting is a basic requirement for traffic management. We propose a traffic forecasting approach that utilizes air pollution and atmospheric parameters. Air pollution levels are often associated with traffic intensity, and much work is already available in which air pollution has been predicted using road traffic. However, to the best of our knowledge, an attempt to improve road traffic forecasting using air pollution and atmospheric parameters is not yet available in the literature. In our preliminary experiments, we found a relation between traffic intensity, air pollution, and atmospheric parameters. Therefore, we believe that the addition of air pollutants and atmospheric parameters can improve traffic forecasting. Our method uses air pollution gases, including CO, NO, NO2, NOx, and O3. We chose these gases because they are associated with road traffic. Some atmospheric parameters, including pressure, temperature, wind direction, and wind speed, have also been considered, as these parameters can play an important role in the dispersion of the above-mentioned gases. Data related to traffic flow, air pollution, and the atmosphere were collected from the open data portal of Madrid, Spain. A long short-term memory (LSTM) recurrent neural network (RNN) was used in this paper to perform traffic forecasting. Full article
(This article belongs to the Special Issue Smart Sensors and Devices in Artificial Intelligence)
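The data preparation step for this kind of multivariate LSTM forecasting is worth making concrete: each training example is a lookback window of recent rows (traffic plus pollutant and weather columns), and the target is the next traffic value. A minimal sketch under assumed column ordering, not the authors' pipeline:

```python
def make_windows(series, lookback=3):
    """Turn a multivariate time series into (window, target) pairs.

    series: list of rows [traffic, CO, NO2, pressure, ...], oldest first.
    Each window holds `lookback` consecutive rows; the target is the
    traffic value (column 0) of the row that follows the window.
    """
    X, y = [], []
    for t in range(len(series) - lookback):
        X.append(series[t:t + lookback])
        y.append(series[t + lookback][0])
    return X, y
```

The resulting X has shape (samples, lookback, features), which is exactly the 3-D input an LSTM layer expects.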

17 pages, 8661 KiB  
Article
Vehicle Detection under Adverse Weather from Roadside LiDAR Data
by Jianqing Wu, Hao Xu, Yuan Tian, Rendong Pi and Rui Yue
Sensors 2020, 20(12), 3433; https://doi.org/10.3390/s20123433 - 17 Jun 2020
Cited by 33 | Viewed by 5026
Abstract
Roadside light detection and ranging (LiDAR) is an emerging traffic data collection device and has recently been deployed in different transportation areas. The current data processing algorithms for roadside LiDAR are usually developed assuming normal weather conditions. Adverse weather conditions, such as windy [...] Read more.
Roadside light detection and ranging (LiDAR) is an emerging traffic data collection device that has recently been deployed in different transportation areas. The current data processing algorithms for roadside LiDAR are usually developed assuming normal weather conditions. Adverse weather conditions, such as windy and snowy conditions, pose challenges for data processing. This paper examines the performance of state-of-the-art data processing algorithms developed for roadside LiDAR under adverse weather and then proposes an improved background filtering and object clustering method to process the roadside LiDAR data, which was proven to perform better under windy and snowy weather. The testing results showed that the accuracy of the background filtering and point clustering was greatly improved compared to the state-of-the-art methods. With this new approach, vehicles can be identified with relatively high accuracy under windy and snowy weather. Full article
(This article belongs to the Special Issue Smart Sensors and Devices in Artificial Intelligence)

15 pages, 3347 KiB  
Article
Real-Time Queue Length Detection with Roadside LiDAR Data
by Jianqing Wu, Hao Xu, Yongsheng Zhang, Yuan Tian and Xiuguang Song
Sensors 2020, 20(8), 2342; https://doi.org/10.3390/s20082342 - 20 Apr 2020
Cited by 10 | Viewed by 4284
Abstract
Real-time queue length information is an important input for many traffic applications. This paper presents a novel method for real-time queue length detection with roadside LiDAR data. Vehicles on the road were continuously tracked with the LiDAR data processing procedures (including background filtering, [...] Read more.
Real-time queue length information is an important input for many traffic applications. This paper presents a novel method for real-time queue length detection with roadside LiDAR data. Vehicles on the road were continuously tracked with the LiDAR data processing procedures (including background filtering, point clustering, object classification, lane identification and object association). A detailed method to identify the vehicle at the end of the queue considering the occlusion issue and package loss issue was documented in this study. The proposed method can provide real-time queue length information. The performance of the proposed queue length detection method was evaluated with the ground-truth data collected from three sites in Reno, Nevada. Results show the proposed method can achieve an average of 98% accuracy at the six investigated sites. The errors in the queue length detection were also diagnosed. Full article
(This article belongs to the Special Issue Smart Sensors and Devices in Artificial Intelligence)
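The core of queue-end identification can be pictured with a much-simplified rule: walk away from the stop bar over tracked vehicles, and extend the queue while each vehicle is (nearly) stopped and close behind the previous one. All thresholds and data shapes here are illustrative assumptions, not the paper's occlusion- and packet-loss-aware method:

```python
def queue_length(vehicles, speed_stop=1.0, max_gap=10.0):
    """Locate the back of the queue from per-vehicle tracking output.

    vehicles: list of (distance_from_stop_bar_m, speed_m_s), any order.
    A vehicle joins the queue if it moves slower than `speed_stop` and
    sits within `max_gap` metres of the previously queued vehicle.
    Returns the position of the last queued vehicle (0.0 if no queue).
    """
    end = 0.0
    for pos, speed in sorted(vehicles):  # nearest to the stop bar first
        if speed <= speed_stop and pos - end <= max_gap:
            end = pos
        else:
            break
    return end
```

A stopped vehicle separated from the queue by a large gap, or a vehicle still moving, terminates the scan, mirroring the intuition that the queue ends at the last closely spaced stopped vehicle.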

19 pages, 7192 KiB  
Article
A Clamping Force Estimation Method Based on a Joint Torque Disturbance Observer Using PSO-BPNN for Cable-Driven Surgical Robot End-Effectors
by Zhengyu Wang, Daoming Wang, Bing Chen, Lingtao Yu, Jun Qian and Bin Zi
Sensors 2019, 19(23), 5291; https://doi.org/10.3390/s19235291 - 1 Dec 2019
Cited by 20 | Viewed by 4652
Abstract
The ability to sense external force is an important technique for force feedback, haptics and safe interaction control in minimally-invasive surgical robots (MISRs). Moreover, this ability plays a significant role in the restricting refined surgical operations. The wrist joints of surgical robot end-effectors [...] Read more.
The ability to sense external force is an important technique for force feedback, haptics, and safe interaction control in minimally invasive surgical robots (MISRs). Moreover, this ability plays a significant role in refined surgical operations. The wrist joints of surgical robot end-effectors are usually actuated by several long-distance wire cables, with each of the two forceps actuated by two cables. The scope of force sensing includes multidimensional external force and one-dimensional clamping force. This paper focuses on a one-dimensional clamping force sensing method that does not require any internal force sensor integrated into the end-effector's forceps. A new clamping force estimation method is proposed based on a joint torque disturbance observer (JTDO) for a cable-driven surgical robot end-effector. The JTDO essentially considers the variations between the actual cable tension and the cable tension estimated by a Particle Swarm Optimization Back Propagation Neural Network (PSO-BPNN) under free motion. Furthermore, a clamping force estimator is proposed based on the forceps' JTDO and their mechanical relations. According to comparative analyses in experimental studies, the detection resolutions of collision force and clamping force were 0.11 N. The experimental results verify the feasibility and effectiveness of the proposed clamping force sensing method. Full article
(This article belongs to the Special Issue Smart Sensors and Devices in Artificial Intelligence)

13 pages, 2638 KiB  
Article
Fuzzy Functional Dependencies as a Method of Choice for Fusion of AIS and OTHR Data
by Medhat Abdel Rahman Mohamed Mostafa, Miljan Vucetic, Nikola Stojkovic, Nikola Lekić and Aleksej Makarov
Sensors 2019, 19(23), 5166; https://doi.org/10.3390/s19235166 - 26 Nov 2019
Cited by 4 | Viewed by 3030
Abstract
Maritime situational awareness at over-the-horizon (OTH) distances in exclusive economic zones can be achieved by deploying networks of high-frequency OTH radars (HF-OTHR) in coastal countries along with exploiting automatic identification system (AIS) data. In some regions the reception of AIS messages can be [...] Read more.
Maritime situational awareness at over-the-horizon (OTH) distances in exclusive economic zones can be achieved by deploying networks of high-frequency OTH radars (HF-OTHR) in coastal countries along with exploiting automatic identification system (AIS) data. In some regions, the reception of AIS messages can be unreliable and subject to high latency, which leads to difficulties in properly associating AIS data with OTHR tracks. Long history records of the previous whereabouts of vessels, based on both OTHR tracks and AIS data, can be maintained in order to increase the chances of fusion. If the quantity of data increases significantly, data cleaning can be done to minimize system requirements. This process is performed prior to fusing AIS data and observed OTHR tracks. In this paper, we use fuzzy functional dependencies (FFDs) in the context of data fusion from AIS and OTHR sources. The fuzzy logic approach has been shown to be a promising tool for handling data uncertainty from different sensors. The proposed method is experimentally evaluated by fusing AIS data and the target tracks provided by the OTHR installed in the Gulf of Guinea. Full article
(This article belongs to the Special Issue Smart Sensors and Devices in Artificial Intelligence)
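The fuzzy-matching flavor of such AIS-to-OTHR association can be sketched with per-attribute triangular membership functions combined by a fuzzy AND (minimum). This is a generic illustration of fuzzy similarity scoring, with invented attribute names and tolerances, not the paper's FFD machinery:

```python
def fuzzy_match(track_a, track_b, tolerances):
    """Fuzzy conformance of two track reports (0 = no match, 1 = identical).

    track_a, track_b: dicts of attribute -> value (e.g. lat, lon, speed).
    tolerances: attribute -> difference at which similarity drops to 0.
    Per-attribute similarity uses a triangular membership function; the
    overall score is their minimum (fuzzy AND).
    """
    sims = []
    for key, tol in tolerances.items():
        diff = abs(track_a[key] - track_b[key])
        sims.append(max(0.0, 1.0 - diff / tol))
    return min(sims)
```

Pairs whose overall score exceeds a chosen threshold would be candidates for fusing an AIS report with an OTHR track.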

19 pages, 4144 KiB  
Article
An Audification and Visualization System (AVS) of an Autonomous Vehicle for Blind and Deaf People Based on Deep Learning
by Surak Son, YiNa Jeong and Byungkwan Lee
Sensors 2019, 19(22), 5035; https://doi.org/10.3390/s19225035 - 18 Nov 2019
Cited by 5 | Viewed by 4262
Abstract
When blind and deaf people are passengers in fully autonomous vehicles, an intuitive and accurate visualization screen should be provided for the deaf, and an audification system with speech-to-text (STT) and text-to-speech (TTS) functions should be provided for the blind. However, these systems [...] Read more.
When blind and deaf people are passengers in fully autonomous vehicles, an intuitive and accurate visualization screen should be provided for the deaf, and an audification system with speech-to-text (STT) and text-to-speech (TTS) functions should be provided for the blind. However, these systems cannot know the fault self-diagnosis information and the instrument cluster information that indicates the current state of the vehicle when driving. This paper proposes an audification and visualization system (AVS) of an autonomous vehicle for blind and deaf people based on deep learning to solve this problem. The AVS consists of three modules. The data collection and management module (DCMM) stores and manages the data collected from the vehicle. The audification conversion module (ACM) has a speech-to-text submodule (STS) that recognizes a user’s speech and converts it to text data, and a text-to-wave submodule (TWS) that converts text data to voice. The data visualization module (DVM) visualizes the collected sensor data, fault self-diagnosis data, etc., and places the visualized data according to the size of the vehicle’s display. The experiment shows that the time taken to adjust visualization graphic components in on-board diagnostics (OBD) was approximately 2.5 times faster than the time taken in a cloud server. In addition, the overall computational time of the AVS system was approximately 2 ms faster than the existing instrument cluster. Therefore, because the AVS proposed in this paper can enable blind and deaf people to select only what they want to hear and see, it reduces the overload of transmission and greatly increases the safety of the vehicle. If the AVS is introduced in a real vehicle, it can prevent accidents for disabled and other passengers in advance. Full article
(This article belongs to the Special Issue Smart Sensors and Devices in Artificial Intelligence)

24 pages, 4268 KiB  
Article
Distributed Reliable and Efficient Transmission Task Assignment for WSNs
by Xiaojuan Zhu, Kuan-Ching Li, Jinwei Zhang and Shunxiang Zhang
Sensors 2019, 19(22), 5028; https://doi.org/10.3390/s19225028 - 18 Nov 2019
Cited by 10 | Viewed by 2993
Abstract
Task assignment is a crucial problem in wireless sensor networks (WSNs) that may affect the completion quality of sensing tasks. From the perspective of global optimization, a transmission-oriented reliable and energy-efficient task allocation (TRETA) scheme is proposed, based on a comprehensive multi-level view of the network and an evaluation model for transmission in WSNs. To deliver better fault tolerance, TRETA adjusts dynamically in event-driven mode. To solve the reliable and efficient distributed task allocation problem in WSNs, two distributed task assignment schemes based on TRETA are proposed. In the first, the sink assigns reliability targets to all cluster heads according to the overall reliability requirement, and each cluster head performs local task allocation under its assigned phase reliability constraint. Simulation results show reduced communication cost and latency of task allocation compared to centralized task assignment. In the second, a global view is obtained by fetching local views from multiple sink nodes, so that the multiple sinks share a consistent comprehensive view for global optimization. Responding to local task allocation requirements without communicating with remote nodes overcomes the disadvantages of centralized task allocation in large-scale sensor networks, namely significant communication overhead and considerable delay, and yields better scalability. Full article
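One way to picture the sink assigning per-phase reliability targets is the standard series-reliability decomposition: if a task must reach the sink with end-to-end reliability R across n independent phases, the phase reliabilities multiply, so each phase must meet at least R^(1/n). This is a sketch of that general principle only; the paper's actual assignment rule is not given here and may differ.

```python
# Illustrative series-reliability split; not the paper's exact algorithm.

def phase_reliability(end_to_end: float, n_phases: int) -> float:
    """Per-phase target so that n phases in series meet the overall goal."""
    return end_to_end ** (1.0 / n_phases)

r = phase_reliability(0.9, 3)
print(round(r, 4))       # per-phase target for a 0.9 end-to-end goal
print(round(r ** 3, 4))  # the three phases multiply back to the goal
```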
14 pages, 6000 KiB  
Article
Multiple Event-Based Simulation Scenario Generation Approach for Autonomous Vehicle Smart Sensors and Devices
by Jisun Park, Mingyun Wen, Yunsick Sung and Kyungeun Cho
Sensors 2019, 19(20), 4456; https://doi.org/10.3390/s19204456 - 14 Oct 2019
Cited by 12 | Viewed by 4032
Abstract
Nowadays, deep learning methods based on virtual environments are widely applied in research and technology development for the smart sensors and devices of autonomous vehicles. Learning various driving environments in advance is important for handling unexpected situations that can arise in the real world and for continuing to drive without accidents. To train the smart sensors and devices of an autonomous vehicle well, a virtual simulator should create scenarios covering a variety of possible real-world situations. To create reality-based scenarios, data on the real environment must be collected from a real driving vehicle, or a scenario analysis process must be conducted by experts. However, both approaches increase the time and cost of scenario generation as more scenarios are created. This paper proposes a scenario generation method based on deep learning that creates scenarios automatically for training autonomous vehicle smart sensors and devices. To generate various scenarios, the proposed method uses deep learning to extract multiple events from video taken on a real road and reproduces those events in a virtual simulator. First, a faster region-based convolutional neural network (Faster R-CNN) extracts the bounding boxes of each object in a driving video. Second, high-level event bounding boxes are calculated. Third, long-term recurrent convolutional networks (LRCN) classify each type of extracted event. Finally, all of the event classification results are combined into one scenario. The generated scenarios can be used in an autonomous driving simulator to teach the multiple events that occur during real-world driving. To verify the performance of the proposed scenario generation method, experiments were conducted using real driving video data and a virtual simulator. The deep learning model achieved an accuracy of 95.6%; furthermore, multiple high-level events were extracted, and various scenarios were generated in a virtual simulator for the smart sensors and devices of an autonomous vehicle. Full article
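The final step, combining per-event classification results into one scenario, can be pictured as merging time-stamped event labels into a single timeline. This is a minimal sketch under assumed data structures (frame-indexed event tuples); the paper's scenario format is not specified here.

```python
# Illustrative sketch: merging classified events into one time-ordered
# scenario for a driving simulator. Event tuples are hypothetical.

def build_scenario(events):
    """events: list of (start_frame, end_frame, label) tuples, e.g. from an
    LRCN classifier. Returns one scenario ordered by start frame; events
    with overlapping frame ranges occur concurrently during playback."""
    return sorted(events, key=lambda e: e[0])

events = [(120, 180, "pedestrian_crossing"),
          (30, 90, "lane_change"),
          (60, 150, "lead_vehicle_braking")]
print(build_scenario(events))
```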
26 pages, 8261 KiB  
Article
Robust Visual Tracking Using Structural Patch Response Map Fusion Based on Complementary Correlation Filter and Color Histogram
by Zhaohui Hao, Guixi Liu, Jiayu Gao and Haoyang Zhang
Sensors 2019, 19(19), 4178; https://doi.org/10.3390/s19194178 - 26 Sep 2019
Cited by 6 | Viewed by 2823
Abstract
A part-based strategy has been applied to visual tracking with demonstrated success in recent years. Unlike most existing part-based methods, which employ only one type of tracking representation model, in this paper we propose an effective complementary tracker based on structural patch response fusion under correlation filter and color histogram models. The proposed method includes two component trackers with complementary merits that adaptively handle illumination variation and deformation. To identify and take full advantage of reliable patches, we present an adaptive hedge algorithm that hedges the responses of the patches into a more credible one in each component tracker. In addition, we design different loss metrics for the tracked patches in the two components for use in the proposed hedge algorithm. Finally, we selectively combine the two component trackers at the response-map level with different merging factors according to the confidence of each component tracker. Extensive experimental evaluations on the OTB2013, OTB2015, and VOT2016 datasets show the outstanding performance of the proposed algorithm compared with several state-of-the-art trackers. Full article
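The hedge step referred to above builds on the classic Hedge (multiplicative-weights) update, which down-weights experts, here patches, that incur high loss. The sketch below shows only that standard update; the paper's adaptive variant, and how it sets the losses and learning rate, differs and is not reproduced here.

```python
# Illustrative standard Hedge update for patch weights; not the paper's
# adaptive variant.
import math

def hedge_update(weights, losses, eta=1.0):
    """Multiply each weight by exp(-eta * loss), then renormalize, so
    patches with smaller loss gain influence on the fused response."""
    scaled = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]
    total = sum(scaled)
    return [s / total for s in scaled]

w = hedge_update([0.25, 0.25, 0.25, 0.25], [0.1, 0.9, 0.2, 0.8])
print([round(x, 3) for x in w])  # low-loss patches end up weighted higher
```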
16 pages, 5315 KiB  
Article
Design of a Purely Mechanical Sensor-Controller Integrated System for Walking Assistance on an Ankle-Foot Exoskeleton
by Xiangyang Wang, Sheng Guo, Haibo Qu and Majun Song
Sensors 2019, 19(14), 3196; https://doi.org/10.3390/s19143196 - 19 Jul 2019
Cited by 22 | Viewed by 7263
Abstract
Propulsion during push-off (PO) is a key factor in human locomotion. By detecting the gait stage in real time, assistance can be provided to the human body at the proper moment. In most cases, ankle-foot exoskeletons consist of electronic sensors, microprocessors, and actuators. Although these three elements fulfill the functions of detection, control, and energy injection, they result in a bulky system that reduces wearing comfort. To simplify the sensor-controller system and reduce the mass of the exoskeleton, we designed a smart clutch, a sensor-controller integrated system comprising a sensing part and an executing part. With a spring functioning as the actuator, the whole exoskeleton system is made up entirely of mechanical parts and has no external power source. By controlling the engagement of the actuator based on the signal acquired from the sensing part, the proposed clutch enables the ankle-foot exoskeleton (AFE) to provide additional ankle torque during PO and allows free rotation of the ankle joint during the swing phase, thus reducing the metabolic cost of the human body. The designed clutch has two striking advantages. First, it is lightweight and reliable: it resists the shocks of walking because the system contains no circuitry or power source. Second, because gait detection relies on the contact states between the feet and the ground, the clutch is universal and does not need to be customized for individuals. Full article
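The clutch's decision logic can be summarized as a tiny state rule on foot-ground contact. Note that in the paper this rule is realized purely mechanically, with no software on the device; the sketch below, including the specific contact conditions chosen, is only a hypothetical reading of "engage during push-off, free during swing".

```python
# Hypothetical sketch of the clutch's engagement rule as code; the actual
# device implements this mechanically via foot-ground contact.

def clutch_engaged(heel_contact: bool, toe_contact: bool) -> bool:
    """Engage the spring only during push-off (toe down, heel lifted);
    leave the ankle joint free otherwise, e.g. during the swing phase."""
    return toe_contact and not heel_contact

print(clutch_engaged(False, True))   # push-off: assist
print(clutch_engaged(False, False))  # swing: free rotation
```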
15 pages, 3010 KiB  
Article
A Hybrid CNN–LSTM Algorithm for Online Defect Recognition of CO2 Welding
by Tianyuan Liu, Jinsong Bao, Junliang Wang and Yiming Zhang
Sensors 2018, 18(12), 4369; https://doi.org/10.3390/s18124369 - 10 Dec 2018
Cited by 70 | Viewed by 10745
Abstract
At present, realizing high-quality automatic welding through online monitoring is a research focus in engineering applications. In this paper, a CNN–LSTM algorithm is proposed that combines the advantages of convolutional neural networks (CNNs) and long short-term memory networks (LSTMs). The CNN–LSTM algorithm establishes a shallow CNN to extract the primary features of the molten pool image. The feature tensor extracted by the CNN is then transformed into a feature matrix, and the rows of the feature matrix are fed into the LSTM network for feature fusion. This process realizes the implicit mapping from molten pool images to welding defects. Test results on a self-made molten pool image dataset show that the CNN contributes to the overall feasibility of the CNN–LSTM algorithm and that the LSTM network performs best in the feature fusion stage. The algorithm converges at 300 epochs, and the accuracy of defect detection in the CO2 welding molten pool is 94%. The processing time for a single image is 0.067 ms, which fully meets the requirement for real-time monitoring based on molten pool images. Experimental results on the MNIST and Fashion-MNIST datasets show that the algorithm is universal and can be used for similar image recognition and classification tasks. Full article
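The tensor-to-matrix step can be made concrete with a small sketch. This assumes one plausible reading of the abstract, that a CNN feature tensor of shape (channels, height, width) is unrolled into one row per spatial location, each row being a C-dimensional feature vector fed to the LSTM as one timestep; the paper's exact ordering is not specified here.

```python
# Illustrative sketch (plain Python, hypothetical shapes): turning a CNN
# feature tensor into the row sequence consumed by the LSTM.

def tensor_to_sequence(tensor):
    """tensor: nested list of shape (C, H, W) from the CNN.
    Returns an (H*W) x C matrix; each row is the feature vector of one
    spatial location, fed to the LSTM as one timestep."""
    C, H, W = len(tensor), len(tensor[0]), len(tensor[0][0])
    return [[tensor[c][h][w] for c in range(C)]
            for h in range(H) for w in range(W)]

# 2 channels over a 2x2 feature map -> 4 timesteps of 2-dim features
feat = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
print(tensor_to_sequence(feat))
```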
14 pages, 3166 KiB  
Article
A MEMS IMU De-Noising Method Using Long Short Term Memory Recurrent Neural Networks (LSTM-RNN)
by Changhui Jiang, Shuai Chen, Yuwei Chen, Boya Zhang, Ziyi Feng, Hui Zhou and Yuming Bo
Sensors 2018, 18(10), 3470; https://doi.org/10.3390/s18103470 - 15 Oct 2018
Cited by 94 | Viewed by 7916
Abstract
A Microelectromechanical Systems (MEMS) Inertial Measurement Unit (IMU), containing a three-orthogonal gyroscope and a three-orthogonal accelerometer, has been widely utilized in positioning and navigation due to its gradually improved accuracy, small size, and low cost. However, the errors of a MEMS-IMU-based standalone Inertial Navigation System (INS) diverge dramatically over time, since the MEMS IMU measurements contain various nonlinear errors. Therefore, a MEMS INS is usually integrated with a Global Positioning System (GPS) receiver to provide reliable navigation solutions. The GPS receiver can generate stable and precise position and time information in open-sky environments. However, under signal-challenged conditions, for instance dense forests, urban canyons, or mountain valleys, where the GPS signal is weak or even blocked, the receiver fails to output reliable positioning information, and the integrated system degrades to a standalone INS. Many efforts have been devoted to improving the accuracy of INS, and de-noising or modelling the random errors contained in the MEMS IMU has been demonstrated to be an effective way of improving MEMS INS performance. In this paper, an Artificial Intelligence (AI) method is proposed to de-noise the MEMS IMU output signals: specifically, a popular variant of the Recurrent Neural Network (RNN), the Long Short Term Memory (LSTM) RNN, is employed to filter the MEMS gyroscope outputs, with the signals treated as time series. A MEMS IMU (MSI3200, manufactured by MT Microsystems Company, Shijiazhuang, China) was employed to test the proposed method; 2 min of raw gyroscope data sampled at 400 Hz were collected and used in the test. The results show that the standard deviation (STD) of the three-axis gyroscope data decreased by 60.3%, 37%, and 44.6%, respectively, compared with the raw signals, and correspondingly the three-axis attitude errors decreased by 15.8%, 18.3%, and 51.3%. Furthermore, compared with an Auto Regressive and Moving Average (ARMA) model with fixed parameters, the STD of the three-axis gyroscope outputs decreased by 42.4%, 21.4%, and 21.4%, and the attitude errors decreased by 47.6%, 42.3%, and 52.0%. The results indicate that the de-noising scheme is effective for improving MEMS INS accuracy, and that the proposed LSTM-RNN method is preferable in this application. Full article
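The STD-reduction percentages quoted above follow from a simple metric: one minus the ratio of the de-noised signal's standard deviation to the raw signal's, times 100. A minimal sketch of that computation (the signals below are synthetic, not the MSI3200 data):

```python
# Illustrative computation of the STD-reduction metric used to report
# de-noising gains; example signals are synthetic.
import math

def std(xs):
    """Population standard deviation."""
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

def std_reduction_pct(raw, denoised):
    """Percentage decrease in STD after de-noising."""
    return 100.0 * (1.0 - std(denoised) / std(raw))

raw = [0.0, 1.0, -1.0, 1.0, -1.0, 0.0]   # noisy gyroscope-like series
den = [0.0, 0.5, -0.5, 0.5, -0.5, 0.0]   # after filtering (half amplitude)
print(round(std_reduction_pct(raw, den), 1))
```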