Sensor Data Fusion and Analysis for Automation Systems

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: closed (20 May 2022) | Viewed by 14,707

Special Issue Editor


Prof. Dr. Simon X. Yang
Guest Editor
Advanced Robotics & Intelligent Systems (ARIS) Lab, School of Engineering, University of Guelph, Guelph, ON N1G 2W1, Canada
Interests: artificial intelligence; robotics; sensors and multisensor fusion; wireless sensor networks; control systems; bio-inspired intelligence; machine learning; neural networks; fuzzy systems; computational neuroscience

Special Issue Information

Dear Colleagues,

Research on sensor data fusion and data analysis for automation systems has made significant progress in both theoretical investigation and practical application across many fields, such as sensing, path planning, tracking, and control of autonomous robotic systems; health monitoring, damage identification, multi-sensor fusion, and signal processing for bridges and roads; and monitoring, information analysis, and decision making for environmental, structural, agricultural, and manufacturing systems. A variety of advanced intelligent algorithms and technologies have been developed for accurate information acquisition, effective monitoring, reliable prediction, optimal decision making, and efficient operation of diverse automation systems.

This Special Issue is devoted to new advances and research results on sensor data fusion and analysis for automation systems in transportation, robotics, agriculture, and industry. It will publish work exploring frontier technologies and applications in related fields. The topics of interest include, but are not limited to, the following:

  • Multi-sensor fusion and feature representation
  • Information acquisition and analysis for automation
  • Sensor signal processing and data analysis
  • Big data mining for automation
  • Data fusion based on monitoring for automation
  • Artificial intelligence for automation systems
  • Intelligent robotics and machine vision
  • Machine learning based on prediction and decision making
  • Intelligent control for automation systems

Prof. Dr. Simon X. Yang
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers are published continuously in the journal (as soon as accepted) and listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for the submission of manuscripts is available on the Instructions for Authors page. Sensors is an international, peer-reviewed, open access, semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Artificial intelligence
  • Sensor signal processing and data analysis
  • Multi-sensor fusion
  • Big data and data mining
  • Modeling, prediction and decision making
  • Robotics and machine vision

Published Papers (5 papers)


Research


18 pages, 4941 KiB  
Article
Distributed Camera Subsystem for Obstacle Detection
by Petr Oščádal, Tomáš Spurný, Tomáš Kot, Stefan Grushko, Jiří Suder, Dominik Heczko, Petr Novák and Zdenko Bobovský
Sensors 2022, 22(12), 4588; https://doi.org/10.3390/s22124588 - 18 Jun 2022
Cited by 4 | Viewed by 1645
Abstract
This work focuses on improving a camera system for sensing a workspace in which dynamic obstacles need to be detected. The currently available state-of-the-art solution (MoveIt!) processes data in a centralized manner from cameras that must be registered before the system starts. Our solution enables distributed data processing and dynamic changes to the number of sensors at runtime. The distributed camera data processing is implemented using a dedicated control unit, on which filtering is performed by comparing the real and expected depth images. As part of a performance benchmark, the speed of processing all sensor data into a global voxel map was compared between the centralized system (MoveIt!) and the new distributed system. The distributed system is more flexible: it is less sensitive to the number of cameras, offers better framerate stability, and allows cameras to be added or removed on the fly. The effects of voxel grid size and camera resolution were also compared in the benchmark, where the distributed system showed better results. Finally, the overhead of data transmission in the network is discussed; here, too, the distributed system is considerably more efficient. The decentralized system proves to be faster by 38.7% with one camera and by 71.5% with four cameras.
(This article belongs to the Special Issue Sensor Data Fusion and Analysis for Automation Systems)
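The filtering step the abstract describes, comparing a measured depth image against the depth image expected from the known scene, can be illustrated with a minimal sketch. The function names, the tolerance value, and the pinhole intrinsics (fx, fy, cx, cy) below are illustrative assumptions, not the paper's implementation, which runs on dedicated control units:

```python
import numpy as np

def obstacle_mask(real_depth, expected_depth, tol=0.05):
    # Hypothetical sketch: flag pixels as dynamic obstacles when the
    # measured depth is noticeably closer than the depth expected from
    # the known scene model (e.g., the robot's own body).
    # tol is an assumed tolerance in meters.
    valid = (real_depth > 0) & (expected_depth > 0)
    return valid & (expected_depth - real_depth > tol)

def mask_to_voxels(depth, mask, fx, fy, cx, cy, voxel=0.05):
    # Back-project flagged pixels to 3D with a pinhole model, then
    # quantize into a voxel grid; each camera could contribute such
    # voxels to a shared global map.
    v, u = np.nonzero(mask)
    z = depth[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=1)
    return np.unique(np.floor(pts / voxel).astype(np.int32), axis=0)
```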

17 pages, 3933 KiB  
Article
Real-Time 3D Reconstruction Method Based on Monocular Vision
by Qingyu Jia, Liang Chang, Baohua Qiang, Shihao Zhang, Wu Xie, Xianyi Yang, Yangchang Sun and Minghao Yang
Sensors 2021, 21(17), 5909; https://doi.org/10.3390/s21175909 - 02 Sep 2021
Cited by 7 | Viewed by 3885
Abstract
Real-time 3D reconstruction is one of the currently popular research directions in computer vision, and it has become a core technology in virtual reality, industrial automation systems, and mobile robot path planning. There are three main problems in the real-time 3D reconstruction field. First, it is expensive: it requires multiple, varied sensors, so it is less convenient. Second, reconstruction is slow, and the 3D model cannot be established accurately in real time. Third, the reconstruction error is large and cannot meet the accuracy requirements of many scenes. For this reason, we propose a real-time 3D reconstruction method based on monocular vision. First, a single RGB-D camera is used to collect visual information in real time, and the YOLACT++ network is used to identify and segment the visual information, extracting the important parts. Second, combining the three stages of depth recovery, depth optimization, and depth fusion, we propose a deep-learning-based three-dimensional position estimation method for joint coding of visual information. It reduces the depth error introduced by the depth measurement process, and accurate 3D point values of the segmented image can be obtained directly. Finally, we propose a method based on limited outlier adjustment of the cluster-center distance to optimize the three-dimensional point values obtained above. It improves the real-time reconstruction accuracy and yields a three-dimensional model of the object in real time. Experimental results show that this method requires only a single RGB-D camera, which is not only low cost and convenient to use, but also significantly improves the speed and accuracy of 3D reconstruction.
(This article belongs to the Special Issue Sensor Data Fusion and Analysis for Automation Systems)
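The final optimization step, "limited outlier adjustment of the cluster-center distance," can be approximated by a simple statistical filter. This is a minimal sketch under the assumption that points far from the cluster center are treated as outliers; the cutoff parameter k is illustrative, not taken from the paper:

```python
import numpy as np

def limit_outliers(points, k=2.0):
    # Hypothetical stand-in for the paper's cluster-center-distance
    # adjustment: drop 3D points whose distance to the centroid exceeds
    # the mean distance by more than k standard deviations.
    center = points.mean(axis=0)
    dist = np.linalg.norm(points - center, axis=1)
    keep = dist <= dist.mean() + k * dist.std()
    return points[keep]
```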

14 pages, 2575 KiB  
Article
Convolutional Neural Networks-Based Object Detection Algorithm by Jointing Semantic Segmentation for Images
by Baohua Qiang, Ruidong Chen, Mingliang Zhou, Yuanchao Pang, Yijie Zhai and Minghao Yang
Sensors 2020, 20(18), 5080; https://doi.org/10.3390/s20185080 - 07 Sep 2020
Cited by 12 | Viewed by 2826
Abstract
In recent years, ever more image data has come from various sensors, and object detection plays a vital role in image understanding. For object detection in complex scenes, more detailed information in the image must be captured to improve the accuracy of the detection task. In this paper, we propose an object detection algorithm for images that is jointly trained with semantic segmentation (SSOD). First, we construct a feature extraction network that integrates an hourglass structure with an attention mechanism layer to extract and fuse multi-scale features, generating high-level features with rich semantic information. Second, the semantic segmentation task is used as an auxiliary task so that the algorithm performs multi-task learning. Finally, the multi-scale features are used to predict the location and category of each object. The experimental results show that our algorithm substantially enhances object detection performance, consistently outperforms the three comparison algorithms, and runs at real-time speed, so it can be used for real-time detection.
(This article belongs to the Special Issue Sensor Data Fusion and Analysis for Automation Systems)
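The multi-task idea, detection heads plus an auxiliary per-pixel segmentation head on shared features, can be sketched as below. The layer shapes, the loss weight lam, and the head design are illustrative assumptions; the paper's actual network uses an hourglass backbone with attention layers:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointHeads(nn.Module):
    # Illustrative heads on a shared feature map: class scores and box
    # offsets for detection, plus per-pixel labels for the auxiliary
    # segmentation task. Channel counts are assumptions.
    def __init__(self, in_ch=256, num_classes=21, num_anchors=9):
        super().__init__()
        self.cls = nn.Conv2d(in_ch, num_anchors * num_classes, 3, padding=1)
        self.box = nn.Conv2d(in_ch, num_anchors * 4, 3, padding=1)
        self.seg = nn.Conv2d(in_ch, num_classes, 1)

    def forward(self, feats):
        return self.cls(feats), self.box(feats), self.seg(feats)

def joint_loss(cls_loss, box_loss, seg_logits, seg_target, lam=0.5):
    # The auxiliary segmentation term regularizes the shared features;
    # lam is an assumed weighting between the tasks.
    return cls_loss + box_loss + lam * F.cross_entropy(seg_logits, seg_target)
```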

19 pages, 4973 KiB  
Article
Modeling of Stochastic Wind Based on Operational Flight Data Using Karhunen–Loève Expansion Method
by Xiaolong Wang, Lukas Beller, Claudia Czado and Florian Holzapfel
Sensors 2020, 20(16), 4634; https://doi.org/10.3390/s20164634 - 18 Aug 2020
Cited by 1 | Viewed by 2393
Abstract
Wind has a significant influence on operational flight safety. To quantify the influence of wind characteristics, a wind series generator is required in simulations. This paper presents a method to model stochastic wind from operational flight data using the Karhunen–Loève expansion. The proposed wind model allows us to generate new realizations of wind series that follow the original statistical characteristics. To improve the accuracy of this wind model, a vine copula is used to capture the high-dimensional dependence among the random variables in the expansion. In addition, the proposed stochastic model based on the Karhunen–Loève expansion is compared with the well-known von Karman turbulence model based on the spectral representation. Modeling results on turbulence data validate that the Karhunen–Loève expansion and the spectral representation coincide for stationary processes. Furthermore, construction results for the non-stationary wind process from operational flights show that the generated wind series match the statistical characteristics of the raw data well. The proposed stochastic wind model allows the new wind series to be integrated into Monte Carlo simulations for quantitative assessments.
(This article belongs to the Special Issue Sensor Data Fusion and Analysis for Automation Systems)
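A minimal numpy sketch of the core Karhunen–Loève machinery: estimate the mean and covariance from recorded series, keep the dominant eigenmodes, and synthesize new realizations. The mode count and the independent standard-normal coefficients are simplifying assumptions; the paper instead models the coefficients' dependence with a vine copula:

```python
import numpy as np

def fit_kl(series, n_modes=10):
    # series: (n_records, n_timesteps) array of recorded wind series.
    mean = series.mean(axis=0)
    cov = np.cov(series, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)       # eigenvalues in ascending order
    idx = np.argsort(eigval)[::-1][:n_modes]   # keep the dominant modes
    scale = np.sqrt(np.clip(eigval[idx], 0.0, None))
    return mean, scale, eigvec[:, idx]

def sample_kl(mean, scale, modes, rng=None):
    # One new realization: independent N(0,1) coefficients here; the
    # paper captures their dependence with a vine copula instead.
    rng = rng or np.random.default_rng()
    xi = rng.standard_normal(scale.shape[0])
    return mean + modes @ (scale * xi)
```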

Other


14 pages, 3085 KiB  
Letter
Active Learning Plus Deep Learning Can Establish Cost-Effective and Robust Model for Multichannel Image: A Case on Hyperspectral Image Classification
by Fangyu Shi, Zhaodi Wang, Menghan Hu and Guangtao Zhai
Sensors 2020, 20(17), 4975; https://doi.org/10.3390/s20174975 - 02 Sep 2020
Cited by 5 | Viewed by 2568
Abstract
Relying on large-scale labeled datasets, deep learning has achieved good performance in image classification tasks. In agricultural and biological engineering, however, image annotation is time-consuming and expensive, and it requires annotators with technical skills in specific areas. Obtaining ground truth is difficult because natural images are expensive to acquire. In addition, images in these areas are usually stored as multichannel images, such as computed tomography (CT) images, magnetic resonance images (MRI), and hyperspectral images (HSI). In this paper, we present a framework that combines active learning and deep learning for multichannel image classification. We use three active learning algorithms as selection criteria: least confidence, margin sampling, and entropy. Based on this framework, we further introduce an "image pool" to take full advantage of images generated by data augmentation. To demonstrate the applicability of the proposed framework, we present a case study on agricultural hyperspectral image classification. The results show that the proposed framework achieves better performance than the deep learning model alone. Manually annotating the entire training set achieves an encouraging accuracy; in comparison, using the entropy criterion together with the image pool achieves similar accuracy with only part of the training set manually annotated. In practical applications, the proposed framework can remarkably reduce the labeling effort during model development and updating, and it can be applied to multichannel image classification in agricultural and biological engineering.
(This article belongs to the Special Issue Sensor Data Fusion and Analysis for Automation Systems)
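The three selection criteria the abstract names reduce to simple scores over the model's predicted class probabilities. A minimal sketch, assuming probs is an (n_samples, n_classes) softmax output; the function names and the budget parameter are illustrative:

```python
import numpy as np

def least_confidence(probs):
    return 1.0 - probs.max(axis=1)        # uncertain if the top probability is low

def margin_sampling(probs):
    top2 = np.sort(probs, axis=1)[:, -2:]
    return -(top2[:, 1] - top2[:, 0])     # uncertain if the top-2 gap is small

def entropy(probs):
    return -(probs * np.log(probs + 1e-12)).sum(axis=1)

def select_for_labeling(probs, budget, criterion=entropy):
    # Return indices of the `budget` most uncertain unlabeled samples,
    # which would be sent to the human annotator next.
    return np.argsort(criterion(probs))[::-1][:budget]
```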
