Emerging Algorithms and Applications in Vision Sensors System based on Artificial Intelligence

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (30 September 2018) | Viewed by 69697

Special Issue Editors


Dr. Jiachen Yang
Guest Editor
School of Electrical and Information Engineering, Tianjin University, Tianjin 300000, China
Interests: smart ocean system; intelligent monitoring; sensing network; Internet of Things; marine information processing; vision sensors

Dr. Qinggang Meng
Guest Editor
Department of Computer Science, Loughborough University, Loughborough, UK
Interests: robotics; unmanned aerial vehicles; driverless vehicles; networked systems; ambient assisted living; computer vision; AI and pattern recognition; machine learning and deep learning

Dr. Houbing Song
Guest Editor
Department of Electrical Engineering and Computer Science, Embry-Riddle Aeronautical University, Daytona Beach, FL 32114, USA
Interests: AI/machine learning; cyber-physical systems; cybersecurity and privacy; unmanned aircraft systems; communications and networking

Dr. Burak Kantarci
Guest Editor
School of Electrical Engineering and Computer Science, University of Ottawa, Ottawa, ON K1N 6N5, Canada
Interests: Internet of Things; cybersecurity; crowdsensing and social networks; artificial intelligence; connected vehicles; digital health (d-health); sustainable ICT

Special Issue Information

Dear Colleagues,

Artificial intelligence (AI) is an emerging topic in research on vision sensor systems, which aim to emulate the processing abilities of human intelligence. With the development of computer science, AI is gradually being applied to all aspects of human life. AI-related technologies have moved out of the laboratory and achieved breakthroughs in many fields of application. Among them, machine vision is a rapidly developing branch of AI that is built on vision sensor systems. Machine vision sits at the forefront of interdisciplinary research and, unlike the study of human or animal vision, it uses geometric, physical, and learning techniques to build models, and statistical methods to process data. Vision sensor systems enable a computer to achieve the human visual functions of perceiving, recognizing, and understanding a three-dimensional scene in the objective world. In short, vision sensor systems aim to use machines instead of human eyes for measurement and judgment, e.g., 3D reconstruction, facial recognition, and image retrieval. Applying AI in vision sensor systems can relax a system's requirements on the environment, such as background, occlusion, and location, and can enhance its adaptability and processing speed in complex environments. Vision sensor systems must therefore be combined with AI to develop further.

This Special Issue is dedicated to the development, challenges, and current status of AI for vision sensor systems. Submitted research articles should relate to AI technologies supporting machine vision, and to AI-inspired aspects of target detection, tracking and recognition systems, and image analysis and processing techniques. Manuscripts should not be submitted simultaneously for publication elsewhere. High-quality manuscripts describing future potential or ongoing work are sought.

Topics of interest include, but are not limited to:

  • AI technologies for supporting vision sensor systems
  • Target detection, tracking and recognition in real-time dynamic systems based on vision sensors
  • Face recognition and iris recognition in vision sensor systems based on AI
  • License plate detection and recognition based on vision sensor systems
  • AI-inspired deep learning algorithms, especially unsupervised and semi-supervised learning
  • AI-inspired image/video retrieval and classification in vision sensor systems
  • AI-inspired image/video quality evaluation based on features in vision sensor systems
  • Research and application of AI-inspired visual feature extraction based on vision sensor systems
  • Intelligent information search systems based on vision sensors
  • Intelligent processing of visual information based on vision sensor systems

Dr. Jiachen Yang
Dr. Qinggang Meng
Dr. Houbing Song 
Dr. Burak Kantarci
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (16 papers)


Research

19 pages, 6135 KiB  
Article
An Intelligent Vision Based Sensing Approach for Spraying Droplets Deposition Detection
by Linhui Wang, Xuejun Yue, Yongxin Liu, Jian Wang and Huihui Wang
Sensors 2019, 19(4), 933; https://doi.org/10.3390/s19040933 - 22 Feb 2019
Cited by 8 | Viewed by 3499
Abstract
The rapid development of vision sensors based on artificial intelligence (AI) is reforming industries and making our world smarter. Among these trends, it is of great significance to adapt AI technologies to intelligent agricultural management. In smart agricultural aviation spraying, droplet distribution and deposition are important indexes for estimating effectiveness in the plant protection process. However, conventional approaches are problematic: they lack adaptivity to environmental changes and consume non-reusable test materials. For example, the machine vision algorithms they employ cannot guarantee the division of adhesive droplets, thereby preventing accurate measurement of critical parameters. To alleviate these problems, we put forward an intelligent visual droplet detection node that can adapt to changes in environmental illumination. We then propose a modified marker-controlled watershed segmentation algorithm to segment adhesive droplets and, on the basis of the segmentation results, calculate their characteristic parameters, including number, coverage, and coverage density. Finally, we use the intelligent node to detect droplets and show that droplet regions are effectively segmented and marked. The intelligent node has good adaptability and robustness even under changing illumination. Large-scale distributed detection results indicate that our approach is consistent with the non-recyclable water-sensitive paper approach. Our approach provides an intelligent and environmentally friendly way of testing spraying techniques, especially for plant protection with unmanned aerial vehicles.
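To make the segmentation step concrete, below is a minimal sketch of marker-controlled watershed splitting of touching droplets using scikit-image; the marker spacing and the derived statistics are illustrative assumptions, not the paper's tuned pipeline.

```python
# Sketch: split touching droplets in a binary mask with marker-controlled
# watershed, then derive simple deposition statistics. Parameters are assumed.
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def segment_droplets(binary_mask: np.ndarray):
    """Return a label image plus droplet count and coverage fraction."""
    # Distance to background: droplet centers become local maxima.
    distance = ndi.distance_transform_edt(binary_mask)
    # One marker per presumed droplet center (5 px spacing is assumed).
    peaks = peak_local_max(distance, min_distance=5,
                           labels=binary_mask.astype(int))
    markers = np.zeros(binary_mask.shape, dtype=np.int32)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    # Flood the inverted distance map; watershed ridges split adhesive droplets.
    labels = watershed(-distance, markers, mask=binary_mask)
    count = int(labels.max())
    coverage = float(binary_mask.mean())  # fraction of paper area covered
    return labels, count, coverage
```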

21 pages, 6368 KiB  
Article
Enhanced Clean-In-Place Monitoring Using Ultraviolet Induced Fluorescence and Neural Networks
by Alessandro Simeone, Bin Deng, Nicholas Watson and Elliot Woolley
Sensors 2018, 18(11), 3742; https://doi.org/10.3390/s18113742 - 02 Nov 2018
Cited by 12 | Viewed by 4321
Abstract
Clean-in-place (CIP) processes are extensively used to clean industrial equipment without the need for disassembly. In food manufacturing, cleaning can account for up to 70% of water use and is also a heavy consumer of energy and chemicals. Due to a current lack of real-time in-process monitoring, non-optimal control of the cleaning process parameters and durations results in excessive resource consumption and periods of non-productivity. In this paper, an optical monitoring system is designed and realized to assess the amount of fouling material remaining in process tanks and to predict the required cleaning time. An experimental campaign of CIP tests was carried out using white chocolate as the fouling medium. During the experiments, an image acquisition system equipped with a digital camera and an ultraviolet light source was employed to collect digital images from the process tank. Diverse image segmentation techniques were considered in developing an image processing procedure to assess the area of surface fouling and the fouling volume throughout the cleaning process. An intelligent decision-making support system utilizing a nonlinear autoregressive model with exogenous inputs (NARX) neural network was configured, trained and tested to predict the cleaning time based on the image processing results. Results are discussed in terms of prediction accuracy, and a comparative study of computation time across different image resolutions is reported. The potential benefits of the system for resource and time efficiency in food manufacturing are highlighted.
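As a rough illustration of the NARX idea, the sketch below builds lagged regressors from past cleaning-time values and an exogenous fouling-area signal, then fits a small neural network with scikit-learn; the toy data, lag count, and network size are assumptions, and the paper's dedicated NARX network is not reproduced here.

```python
# Sketch: NARX-style regression with lagged outputs y and exogenous inputs x.
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_narx_dataset(y, x, lags=3):
    """Stack lagged outputs y and exogenous inputs x into NARX regressors."""
    rows, targets = [], []
    for t in range(lags, len(y)):
        rows.append(np.concatenate([y[t - lags:t], x[t - lags:t]]))
        targets.append(y[t])
    return np.asarray(rows), np.asarray(targets)

# Toy signals: x is the fouling area from image processing, y the remaining
# cleaning time (both assumed shapes, for illustration only).
t = np.arange(200, dtype=float)
x = np.exp(-t / 60.0)          # fouling area decays during cleaning
y = 200.0 - t                  # remaining time decreases
X, Y = make_narx_dataset(y, x, lags=3)
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X, Y)
```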

14 pages, 9657 KiB  
Article
Parallel Computation of EM Backscattering from Large Three-Dimensional Sea Surface with CUDA
by Longxiang Linghu, Jiaji Wu, Zhensen Wu and Xiaobing Wang
Sensors 2018, 18(11), 3656; https://doi.org/10.3390/s18113656 - 28 Oct 2018
Cited by 13 | Viewed by 3046
Abstract
An efficient parallel computation using graphics processing units (GPUs) is developed for studying the electromagnetic (EM) backscattering characteristics of a large three-dimensional sea surface. A slope-deterministic composite scattering model (SDCSM), which combines the quasi-specular scattering of the Kirchhoff Approximation (KA) and the Bragg scattering of the two-scale model (TSM), is utilized to calculate the normalized radar cross section (NRCS, in dB) of the sea surface. However, as radar resolution improves, a large sea surface can contain millions of triangular facets, which makes the computation of the NRCS time-consuming and inefficient. In this paper, the feasibility of using an NVIDIA Tesla K80 GPU with four compute unified device architecture (CUDA) optimization strategies to improve the calculation efficiency of EM backscattering from a large sea surface is verified. The GPU-accelerated SDCSM calculation takes full advantage of coalesced memory access, constant memory, fast-math compiler options, and asynchronous data transfer. The impact of block size and the number of registers per thread is analyzed to further improve the computation speed. A significant speedup of 748.26x can be obtained with a single GPU for the GPU-based SDCSM implementation compared with its CPU-based counterpart running on an Intel(R) Core(TM) i5-3450.
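The paper's CUDA implementation is not reproduced here, so the following is only a schematic Numba CUDA sketch of the facet-parallel pattern: one thread per triangular facet, with contributions reduced by atomic addition. It needs a CUDA-capable GPU, and the per-facet formula is a placeholder, not the actual SDCSM term.

```python
# Sketch: facet-parallel accumulation of scattering contributions on a GPU.
import math
import numpy as np
from numba import cuda

@cuda.jit
def facet_nrcs_kernel(slopes, areas, out):
    i = cuda.grid(1)  # one thread per triangular facet
    if i < slopes.size:
        # Placeholder contribution; the real SDCSM term combines KA
        # quasi-specular and Bragg scattering per facet.
        contrib = areas[i] * math.exp(-slopes[i] ** 2)
        cuda.atomic.add(out, 0, contrib)

slopes = np.random.rand(1_000_000).astype(np.float32)
areas = np.ones_like(slopes)
out = np.zeros(1, dtype=np.float32)
threads = 256
blocks = (slopes.size + threads - 1) // threads
facet_nrcs_kernel[blocks, threads](slopes, areas, out)
sigma0_db = 10 * np.log10(out[0] / slopes.size)  # mean NRCS in dB
```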

22 pages, 10960 KiB  
Article
Adaptive Object Tracking via Multi-Angle Analysis Collaboration
by Wanli Xue, Zhiyong Feng, Chao Xu, Zhaopeng Meng and Chengwei Zhang
Sensors 2018, 18(11), 3606; https://doi.org/10.3390/s18113606 - 24 Oct 2018
Cited by 2 | Viewed by 2774
Abstract
Although tracking research has achieved excellent performance from a mathematical perspective, it is still meaningful to analyze tracking problems from multiple angles. This motivation not only promotes the independence of tracking research but also increases the flexibility of practical applications. This paper presents a tracking framework based on multi-dimensional state–action space reinforcement learning, termed multi-angle analysis collaboration tracking (MACT). MACT comprises a basic tracking framework and a strategic framework that assists the former. In particular, the strategic framework is extensible and currently includes a feature selection strategy (FSS) and a movement trend strategy (MTS). These strategies are abstracted from multi-angle analysis of tracking problems (the observer's attention and the object's motion), and the content of the analysis corresponds to specific actions in the multi-dimensional action space. Concretely, the tracker, regarded as an agent, is trained with the Q-learning algorithm and an ϵ-greedy exploration strategy, where we adopt a customized reward function to encourage robust object tracking. Extensive experimental evaluations on the OTB50 benchmark demonstrate the effectiveness of the strategies and the improvements in speed and accuracy of the MACT tracker.
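The learning rule underneath MACT is standard tabular Q-learning with ϵ-greedy exploration; the sketch below shows that rule, with the state/action discretization and hyperparameters as assumptions (the paper's multi-dimensional state–action encoding and reward are not reproduced).

```python
# Sketch: tabular Q-learning with epsilon-greedy exploration.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 64, 8        # assumed discretization sizes
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1  # assumed hyperparameters

def select_action(s):
    if rng.random() < eps:           # explore with probability eps
        return int(rng.integers(n_actions))
    return int(np.argmax(Q[s]))      # otherwise exploit the current estimate

def update(s, a, r, s_next):
    # Standard Q-learning backup toward the greedy bootstrap target.
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
```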

14 pages, 1857 KiB  
Article
A Combinatorial Solution to Point Symbol Recognition
by Yining Quan, Yuanyuan Shi, Qiguang Miao and Yutao Qi
Sensors 2018, 18(10), 3403; https://doi.org/10.3390/s18103403 - 11 Oct 2018
Cited by 5 | Viewed by 2574
Abstract
Recent work has shown that recognizing point symbols is an essential task in map digitization. To identify a symbol, it is generally necessary to compare it against a set of known reference symbols and find the most similar one. Most existing works can only identify a single symbol; a small number handle multiple symbols simultaneously, but with low recognition accuracy. Given these two deficiencies, this paper proposes a deep transfer learning architecture in which the task is to learn a symbol classifier with AlexNet. To cope with the insufficient dataset, we develop a transfer learning method that uses the MNIST dataset to pretrain the model, which compensates for the small training dataset and enhances the generalization of the model. Before recognition, the point symbols in the map are preprocessed to coarsely screen out areas suspected of containing point symbols. We show a significant improvement over using point symbol images alone, maintaining high performance while handling many more categories of symbols simultaneously.
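A minimal PyTorch sketch of the transfer-learning setup follows: a pretrained backbone keeps its convolutional features frozen while the classifier head is replaced for the point-symbol label set. The class count and learning rate are assumptions, and the weights-free AlexNet constructor stands in for the paper's MNIST pretraining.

```python
# Sketch: reuse a pretrained CNN backbone, retrain only the classifier head.
import torch
import torch.nn as nn
from torchvision import models

n_symbol_classes = 20                  # assumed number of symbol categories
net = models.alexnet(weights=None)     # the paper pretrains on MNIST instead
# Replace the final fully connected layer for the new label set.
net.classifier[6] = nn.Linear(net.classifier[6].in_features, n_symbol_classes)
# Freeze convolutional features so only the head adapts to the small dataset.
for p in net.features.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(
    (p for p in net.parameters() if p.requires_grad), lr=1e-4)
criterion = nn.CrossEntropyLoss()
```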

20 pages, 10731 KiB  
Article
Design and Implementation of a Stereo Vision System on an Innovative 6DOF Single-Edge Machining Device for Tool Tip Localization and Path Correction
by Luis López-Estrada, Marcelo Fajardo-Pruna, Lidia Sánchez-González, Hilde Pérez, Laura Fernández-Robles and Antonio Vizán
Sensors 2018, 18(9), 3132; https://doi.org/10.3390/s18093132 - 17 Sep 2018
Cited by 7 | Viewed by 4403
Abstract
In the current meso-scale cutting technology industry, demand is rising for more advanced, accurate, and cheaper devices capable of creating a wide range of surfaces and geometries. To fulfill this demand, an alternative single-point cutting device with six degrees of freedom (6DOF) was developed. Its main advantage compared to milling is that it needs simpler cutting tools that are easier to develop. To obtain accurate and precise geometries, the tool tip must be monitored so that its position can be compensated and the proper corrections made in the computer numerical control (CNC). For this purpose, a stereo vision system was developed as an alternative to the measurement technologies currently available in industry. In this paper, the artificial intelligence technologies required for implementing such a vision system are explored and discussed. The vision system was compared with the commercial measurement software Dino Capture and a dedicated metrological microscope system, the TESA V-200GL. Experimental analyses were carried out and results were measured in terms of accuracy. The proposed vision system yielded a measurement error of ±3 µm.
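The core stereo step can be sketched with OpenCV's triangulation routine, as below; the projection matrices and pixel coordinates are placeholder values, since the real system depends on its own stereo calibration.

```python
# Sketch: triangulate the tool tip's 3-D position from two calibrated views.
import numpy as np
import cv2

# P1, P2: 3x4 projection matrices from stereo calibration (placeholders here;
# identity intrinsics and a 50 mm baseline are assumed for illustration).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-50.0], [0.0], [0.0]])])
tip_left = np.array([[320.0], [240.0]])   # tool-tip pixel in left image
tip_right = np.array([[310.0], [240.0]])  # tool-tip pixel in right image
X_h = cv2.triangulatePoints(P1, P2, tip_left, tip_right)
X = (X_h[:3] / X_h[3]).ravel()            # Euclidean 3-D tool-tip position
```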

24 pages, 7854 KiB  
Article
Hand Tracking and Gesture Recognition Using Lensless Smart Sensors
by Lizy Abraham, Andrea Urru, Niccolò Normani, Mariusz P. Wilk, Michael Walsh and Brendan O’Flynn
Sensors 2018, 18(9), 2834; https://doi.org/10.3390/s18092834 - 28 Aug 2018
Cited by 20 | Viewed by 8326
Abstract
The Lensless Smart Sensor (LSS) developed by Rambus, Inc. is a low-power, low-cost visual sensing technology that captures information-rich optical data in a tiny form factor using a novel approach to optical sensing. The LSS's spiral diffraction gratings, coupled with sophisticated computational algorithms, allow point tracking down to millimeter-level accuracy. This work focuses on developing novel algorithms for the detection of multiple points, thereby enabling hand tracking and gesture recognition using the LSS. The algorithms are formulated from geometrical and mathematical constraints on the placement of infrared light-emitting diodes (LEDs) on the hand. The developed techniques dynamically adapt to the orientation of the hand and the associated gestures. A detailed accuracy analysis of both hand tracking and gesture classification as a function of LED positions is conducted to validate the performance of the system. Our results indicate that the technology is a promising approach, as the current state of the art in human motion tracking requires highly complex and expensive systems. A wearable, low-power, low-cost system could make a significant impact in this field, as it does not require complex hardware or additional sensors on the tracked segments.
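One generic ingredient of multi-point LED tracking is keeping detections consistently associated across frames; the sketch below does this with a globally optimal (Hungarian) assignment on displacement cost. This is an illustrative building block, not the LSS-specific reconstruction.

```python
# Sketch: match LED detections between consecutive frames by minimizing
# total displacement with the Hungarian algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def match_points(prev_pts, curr_pts):
    """Return index pairs linking previous LED positions to current ones."""
    cost = cdist(prev_pts, curr_pts)          # pairwise distances
    rows, cols = linear_sum_assignment(cost)  # globally optimal matching
    return list(zip(rows, cols))

prev_pts = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])
curr_pts = np.array([[10.5, 0.2], [0.3, 0.1], [5.2, 8.4]])
print(match_points(prev_pts, curr_pts))  # [(0, 1), (1, 0), (2, 2)]
```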

20 pages, 7206 KiB  
Article
Superpixel Segmentation Based Synthetic Classifications with Clear Boundary Information for a Legged Robot
by Yaguang Zhu, Kailu Luo, Chao Ma, Qiong Liu and Bo Jin
Sensors 2018, 18(9), 2808; https://doi.org/10.3390/s18092808 - 25 Aug 2018
Cited by 11 | Viewed by 3170
Abstract
For terrain classification by autonomous multi-legged walking robots, two synthetic classification methods, Simple Linear Iterative Clustering based Support Vector Machine (SLIC-SVM) and Simple Linear Iterative Clustering based SegNet (SLIC-SegNet), are proposed. SLIC-SVM solves the problem that an SVM can only output a single terrain label and fails to identify mixed terrain. The SLIC-SegNet single-input multi-output terrain classification model is derived to improve the applicability of the terrain classifier. Since high-quality terrain classification results for legged robot use are hard to obtain, SLIC-SegNet delivers satisfactory information without excessive effort. A series of experiments on regular, irregular and mixed terrain shows that both superpixel-segmentation-based synthetic classification methods supply reliable mixed-terrain classification results with clear boundary information and bring terrain-dependent gait selection and path planning for multi-legged robots closer to practice.
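The SLIC-SVM pipeline can be sketched as: segment into superpixels, classify each superpixel, and project the labels back to pixels so boundaries follow superpixel edges. In the sketch below the per-superpixel feature is just the mean color and the SVM is assumed to be trained beforehand; the paper's feature set is richer.

```python
# Sketch: superpixel-wise terrain classification with SLIC + an SVM.
import numpy as np
from skimage.segmentation import slic
from sklearn.svm import SVC

def classify_terrain(image, clf, n_segments=200):
    """image: HxWx3 array; clf: an SVC already trained on mean-color features,
    e.g. clf = SVC().fit(train_features, train_labels)."""
    segments = slic(image, n_segments=n_segments, start_label=0)
    labels = np.zeros(segments.max() + 1, dtype=int)
    for s in range(segments.max() + 1):
        feat = image[segments == s].mean(axis=0)  # mean RGB as a toy feature
        labels[s] = clf.predict(feat.reshape(1, -1))[0]
    return labels[segments]  # label map with superpixel-aligned boundaries
```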

23 pages, 863 KiB  
Article
Generative vs. Discriminative Recognition Models for Off-Line Arabic Handwriting
by Moftah Elzobi and Ayoub Al-Hamadi
Sensors 2018, 18(9), 2786; https://doi.org/10.3390/s18092786 - 24 Aug 2018
Cited by 4 | Viewed by 2749
Abstract
The majority of handwritten word recognition strategies are constructed on learning-based generative frameworks from letter or word training samples. Theoretically, constructing recognition models through discriminative learning should be the more effective alternative. The primary goal of this research is to compare the performance of discriminative and generative recognition strategies, represented by generatively-trained hidden Markov modeling (HMM), discriminatively-trained conditional random fields (CRF) and discriminatively-trained hidden-state CRF (HCRF). With learning samples obtained from two dissimilar databases, we initially trained and applied an HMM classification scheme. To enable the HMM classifiers to effectively reject incorrect and out-of-vocabulary segmentations, we enhance the models with adaptive threshold schemes. Aside from proposing such schemes for HMM classifiers, this research introduces CRF and HCRF classifiers for the recognition of offline Arabic handwritten words. The efficiency of all three strategies is fully assessed using the two databases. Recognition outcomes for both words and letters are presented, with the pros and cons of each strategy emphasized.
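On the generative side, recognition reduces to scoring a feature sequence against one HMM per word class and rejecting low-likelihood inputs; the hedged sketch below uses the hmmlearn package, with the state count and a fixed rejection threshold as assumptions (the paper's thresholds are adaptive).

```python
# Sketch: per-class Gaussian HMMs with maximum-likelihood recognition and
# a simple out-of-vocabulary rejection threshold.
import numpy as np
from hmmlearn import hmm

def train_models(samples_by_class, n_states=5):
    """samples_by_class: dict label -> list of (T_i, d) feature sequences."""
    models = {}
    for label, seqs in samples_by_class.items():
        X = np.vstack(seqs)
        lengths = [len(s) for s in seqs]
        m = hmm.GaussianHMM(n_components=n_states, n_iter=20)
        m.fit(X, lengths)
        models[label] = m
    return models

def recognize(models, seq, reject_threshold=-1e4):
    scores = {label: m.score(seq) for label, m in models.items()}
    best = max(scores, key=scores.get)
    # Reject out-of-vocabulary input when even the best model fits poorly.
    return best if scores[best] > reject_threshold else None
```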

19 pages, 1996 KiB  
Article
Towards Low-Cost Yet High-Performance Sensor Networks by Deploying a Few Ultra-fast Charging Battery Powered Sensors
by Qing Guo, Wenzheng Xu, Tang Liu, Hongyou Li, Zheng Li and Jian Peng
Sensors 2018, 18(9), 2771; https://doi.org/10.3390/s18092771 - 23 Aug 2018
Cited by 2 | Viewed by 2968
Abstract
The employment of mobile vehicles to charge sensors via wireless energy transfer is a promising technology for maintaining the perpetual operation of wireless sensor networks (WSNs). Most existing studies assume that sensors are powered with off-the-shelf batteries, e.g., lithium batteries, which are cheap but take non-trivial time to fully charge (e.g., 30–80 min). The long charging time may incur long sensor dead durations, especially when there are many lifetime-critical sensors to be charged. Other studies assume that every sensor is powered with an ultra-fast charging battery that takes only trivial time to replenish, e.g., 1 min, but adopting many ultra-fast sensors brings high purchasing cost. In this paper, we propose a novel heterogeneous sensor network model with only a few ultra-fast sensors and many low-cost off-the-shelf sensors. The deployment cost of the network in this model is low, as the number of ultra-fast sensors is limited. We also make the important observation that sensor dead durations can be significantly shortened by enabling the ultra-fast sensors to relay more data for lifetime-critical off-the-shelf sensors. We then propose a joint charging scheduling and routing allocation algorithm that minimizes the longest sensor dead duration. We finally evaluate the performance of the proposed algorithm through extensive simulation experiments. Experimental results show that the proposed algorithm is very promising: the longest sensor dead duration it yields is only about 10% of those of existing algorithms.
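As a toy illustration of the prioritization intuition only (not the paper's joint scheduling-and-routing algorithm), the snippet below orders charging visits by remaining sensor lifetime with a min-heap, so the most lifetime-critical nodes are served first.

```python
# Sketch: greedy charging order by remaining lifetime (illustrative only).
import heapq

def charge_order(sensors):
    """sensors: list of (remaining_lifetime_s, node_id). Returns visit order."""
    heap = list(sensors)
    heapq.heapify(heap)              # min-heap keyed on remaining lifetime
    order = []
    while heap:
        _, node = heapq.heappop(heap)
        order.append(node)
    return order

print(charge_order([(120, "s1"), (30, "s2"), (75, "s3")]))  # ['s2', 's3', 's1']
```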

17 pages, 4298 KiB  
Article
A Novel Semi-Supervised Feature Extraction Method and Its Application in Automotive Assembly Fault Diagnosis Based on Vision Sensor Data
by Xuan Zeng, Shi-Bin Yin, Yin Guo, Jia-Rui Lin and Ji-Gui Zhu
Sensors 2018, 18(8), 2545; https://doi.org/10.3390/s18082545 - 03 Aug 2018
Cited by 14 | Viewed by 3250
Abstract
The fault diagnosis of dimensional variation plays an essential role in the production of an automotive body. However, it is difficult to identify faults from small labeled samples using traditional supervised learning methods. The present study proposes a novel feature extraction method named semi-supervised complete kernel Fisher discriminant analysis (SS-CKFDA), and a new fault diagnosis flow for automotive assembly is introduced based on this method. SS-CKFDA combines traditional complete kernel Fisher discriminant analysis (CKFDA) with semi-supervised learning: it adjusts the Fisher criterion with the global data structure extracted from large numbers of unlabeled samples. When the number of labeled samples is small, the global structure of the measured data can effectively improve the extraction of the projection vectors. Experimental results on Tennessee Eastman Process (TEP) data demonstrate that the proposed method improves diagnostic performance compared to other Fisher discriminant algorithms. Finally, experimental results on optical coordinate data prove that the method can be applied to the automotive assembly process and achieves better performance.
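A numerical sketch of the semi-supervised adjustment is given below: the labeled within-class scatter is blended with the global scatter of unlabeled samples before solving the generalized eigenproblem of the Fisher criterion. The blending form and the linear (non-kernel) setting are assumptions made to keep the sketch short.

```python
# Sketch: Fisher discriminant direction regularized by unlabeled-data scatter.
import numpy as np
from scipy.linalg import eigh

def ss_fisher_direction(X_lab, y, X_unlab, beta=0.5):
    classes = np.unique(y)
    mean_all = X_lab.mean(axis=0)
    # Within-class and between-class scatter from the labeled samples.
    Sw = sum(np.cov(X_lab[y == c].T, bias=True) * (y == c).sum()
             for c in classes)
    Sb = sum((y == c).sum() * np.outer(X_lab[y == c].mean(0) - mean_all,
                                       X_lab[y == c].mean(0) - mean_all)
             for c in classes)
    # Global structure from unlabeled data regularizes the within scatter.
    St_unlab = np.cov(X_unlab.T, bias=True) * len(X_unlab)
    Sw_ss = (1 - beta) * Sw + beta * St_unlab
    _, vecs = eigh(Sb, Sw_ss + 1e-6 * np.eye(Sw.shape[0]))
    return vecs[:, -1]  # leading discriminant direction
```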

19 pages, 9253 KiB  
Article
Features of X-Band Radar Backscattering Simulation Based on the Ocean Environmental Parameters in China Offshore Seas
by Tao Wu, Zhensen Wu, Jiaji Wu, Gwanggil Jeon and Liwen Ma
Sensors 2018, 18(8), 2450; https://doi.org/10.3390/s18082450 - 28 Jul 2018
Cited by 4 | Viewed by 3796
Abstract
X-band marine radar has been employed as a remote sensing tool for sea state monitoring. However, there is little literature on sea spectra that considers both the wave parameters and the short wind-wave spectra in the China offshore seas, which are of theoretical and practical significance. Based on wave parameters acquired from 36 months (2015 to 2017) of European Centre for Medium-Range Weather Forecasts reanalysis data (ERA-Interim), a finite-depth sea spectrum considering both wind speeds and ocean environmental parameters is established in this study. The wave spectrum is then built into a modified two-scale model, which can be related to the ocean environmental parameters (wind speeds and wave parameters). The final results are the mean backscattering coefficients over a variety of sea states at a given wind speed. As the model predicts, the monthly maximum backscattering coefficients in different seas change slowly (within 4 dB). In addition, the differences between the backscattering coefficients in different seas are quite small at azimuthal angles of 0° to 90° and 270° to 360°, with a relative error within 1.5 dB at low wind speed (5 m/s) and 2 dB at high wind speed (10 m/s). With the method presented in this paper, corrected experimental results can be obtained based on the relative error analysis under different conditions.
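To fix ideas about wind-wave spectra, here is the classical deep-water Pierson-Moskowitz spectrum in a few lines; this is only illustrative, since the paper constructs a finite-depth spectrum from ERA-Interim wave parameters rather than wind speed alone.

```python
# Sketch: Pierson-Moskowitz wind-wave spectral density S(omega).
import numpy as np

def pierson_moskowitz(omega, U19_5, g=9.81, alpha=0.0081, beta=0.74):
    """S(omega) for wind speed U19_5 (m/s, measured at 19.5 m height)."""
    return (alpha * g**2 / omega**5) * np.exp(-beta * (g / (U19_5 * omega))**4)

omega = np.linspace(0.1, 3.0, 300)   # angular frequencies (rad/s)
S = pierson_moskowitz(omega, U19_5=10.0)
```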

17 pages, 9629 KiB  
Article
Generalized Vision-Based Detection, Identification and Pose Estimation of Lamps for BIM Integration
by Francisco Troncoso-Pastoriza, Javier López-Gómez and Lara Febrero-Garrido
Sensors 2018, 18(7), 2364; https://doi.org/10.3390/s18072364 - 20 Jul 2018
Cited by 7 | Viewed by 3439
Abstract
This paper introduces a comprehensive computer vision approach for the automatic detection, identification and pose estimation of lamps in a building using image and location data from low-cost sensors, allowing their incorporation into building information modelling (BIM). The procedure is based on our previous work, but the algorithms are substantially improved by generalizing the detection to any light surface type, including polygonal and circular shapes, and by refining the BIM integration. We validate the complete methodology with a case study at the Mining and Energy Engineering School and achieve reliable results, increasing successful real-time detections while using low computational resources, leading to an accurate, cost-effective and advanced method. The suitability and adequacy of the method are demonstrated.
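A generic light-surface detector in the spirit of the paper might threshold bright regions and classify contour geometry as polygonal or circular, as in the OpenCV sketch below; the brightness threshold, area filter, and vertex-count rule are all assumptions.

```python
# Sketch: find bright lamp regions and label their shape family.
import cv2

def detect_lamps(gray, bright_thresh=220):
    """gray: single-channel uint8 image. Returns (shape, bounding_box) pairs."""
    _, mask = cv2.threshold(gray, bright_thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    lamps = []
    for c in contours:
        if cv2.contourArea(c) < 50:
            continue                            # ignore small specks
        peri = cv2.arcLength(c, True)
        approx = cv2.approxPolyDP(c, 0.02 * peri, True)
        # Few polygon vertices suggests a polygonal luminaire; many, a circle.
        shape = "polygonal" if len(approx) <= 6 else "circular"
        lamps.append((shape, cv2.boundingRect(c)))
    return lamps
```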

17 pages, 9848 KiB  
Article
Robust Visual Tracking Based on Adaptive Convolutional Features and Offline Siamese Tracker
by Ximing Zhang and Mingang Wang
Sensors 2018, 18(7), 2359; https://doi.org/10.3390/s18072359 - 20 Jul 2018
Cited by 4 | Viewed by 2951
Abstract
Robust and accurate visual tracking is one of the most challenging computer vision problems. Due to the inherent lack of training data, a robust approach to constructing a target appearance model is crucial. The existing spatially regularized discriminative correlation filter (SRDCF) method learns partial-target information or background information under rotation, out-of-view motion, and heavy occlusion. To reduce computational complexity and enhance tracking ability, we first introduce an adaptive dimensionality reduction technique that extracts features from the image based on the pre-trained VGG-Net. We then propose an adaptive model update that assigns weights during the update procedure depending on the peak-to-sidelobe ratio. Finally, we combine the online SRDCF-based tracker with an offline Siamese tracker to accomplish long-term tracking. Experimental results demonstrate that the proposed tracker performs satisfactorily in a wide range of challenging tracking scenarios.
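The peak-to-sidelobe ratio (PSR) that drives the adaptive model update is easy to state precisely; the sketch below computes the PSR of a correlation response map and damps the learning rate when the response is unreliable (the exclusion window and the scaling rule are assumptions).

```python
# Sketch: PSR of a 2-D correlation response, used to weight model updates.
import numpy as np

def psr(response, exclude=11):
    """(peak - sidelobe mean) / sidelobe std, excluding a window at the peak."""
    peak = response.max()
    py, px = np.unravel_index(response.argmax(), response.shape)
    mask = np.ones_like(response, dtype=bool)
    half = exclude // 2
    mask[max(0, py - half):py + half + 1,
         max(0, px - half):px + half + 1] = False
    sidelobe = response[mask]
    return (peak - sidelobe.mean()) / (sidelobe.std() + 1e-8)

def adaptive_lr(response, base_lr=0.02):
    # Damp the update when the PSR is low (scaling constant assumed).
    return base_lr * min(1.0, psr(response) / 10.0)
```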

17 pages, 19593 KiB  
Article
Multi-Focus Image Fusion Method for Vision Sensor Systems via Dictionary Learning with Guided Filter
by Qilei Li, Xiaomin Yang, Wei Wu, Kai Liu and Gwanggil Jeon
Sensors 2018, 18(7), 2143; https://doi.org/10.3390/s18072143 - 03 Jul 2018
Cited by 22 | Viewed by 5093
Abstract
Vision sensor systems (VSS) are widely deployed in surveillance, traffic and industrial contexts, and large numbers of images can be obtained via VSS. Because of the limitations of vision sensors, it is difficult to obtain an all-focused image, which complicates analyzing and understanding the image. In this paper, a novel multi-focus image fusion method (SRGF) is proposed. The proposed method uses sparse coding to classify focused and defocused regions to obtain focus feature maps. A guided filter (GF) is then used to calculate score maps, and an initial decision map is obtained by comparing the score maps. After consistency verification, the initial decision map is further refined by the guided filter to obtain the final decision map. Experiments show that our method obtains satisfying fusion results and is competitive with existing state-of-the-art fusion methods.
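The decision-map refinement can be sketched with OpenCV's guided filter (from the opencv-contrib package), as below; here the focus measure is simplified to local variance, whereas the paper classifies focused regions with sparse coding.

```python
# Sketch: multi-focus fusion via a guided-filter-refined decision map.
import cv2
import numpy as np

def fuse(img_a, img_b, radius=8, eps=1e-3):
    def focus(img):
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float32)
        mean = cv2.blur(gray, (9, 9))
        return cv2.blur((gray - mean) ** 2, (9, 9))  # local variance
    # Binary focus decision, then edge-aware refinement by the guided filter.
    decision = (focus(img_a) > focus(img_b)).astype(np.float32)
    guide = cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255
    weight = cv2.ximgproc.guidedFilter(guide, decision, radius, eps)[..., None]
    fused = weight * img_a + (1 - weight) * img_b
    return np.clip(fused, 0, 255).astype(np.uint8)
```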

30 pages, 13062 KiB  
Article
LightDenseYOLO: A Fast and Accurate Marker Tracker for Autonomous UAV Landing by Visible Light Camera Sensor on Drone
by Phong Ha Nguyen, Muhammad Arsalan, Ja Hyung Koo, Rizwan Ali Naqvi, Noi Quang Truong and Kang Ryoung Park
Sensors 2018, 18(6), 1703; https://doi.org/10.3390/s18061703 - 24 May 2018
Cited by 56 | Viewed by 11719
Abstract
Autonomous landing of an unmanned aerial vehicle (UAV), or drone, is a challenging problem for the robotics research community. Previous researchers have attempted to solve it by combining multiple sensors such as global positioning system (GPS) receivers, inertial measurement units, and multiple camera systems. Although these approaches successfully estimate a UAV's location during landing, many calibration processes are required to achieve good detection accuracy. In addition, cases where drones operate in heterogeneous areas with no GPS signal should be considered. To overcome these problems, we determined how to safely land a drone in a GPS-denied environment using a remote-marker-based tracking algorithm based on a single visible-light camera sensor. Instead of using hand-crafted features, our algorithm includes a convolutional neural network, named lightDenseYOLO, that extracts trained features from an input image to predict the marker's location from the drone's visible-light camera. Experimental results show that our method significantly outperforms state-of-the-art object trackers, both with and without convolutional neural networks, in terms of accuracy and processing time.
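The network itself is not reproducible from the abstract, but a small building block of marker tracking, namely bridging frames between CNN detections with a constant-velocity prediction, can be sketched as follows; the velocity model is an illustrative assumption.

```python
# Sketch: constant-velocity coasting between CNN marker detections.
import numpy as np

class MarkerPredictor:
    def __init__(self):
        self.pos = None
        self.vel = np.zeros(2)

    def update(self, detection):
        """detection: (x, y) pixel center from the detector, or None if missed."""
        if detection is not None:
            d = np.asarray(detection, dtype=float)
            if self.pos is not None:
                self.vel = d - self.pos      # per-frame velocity estimate
            self.pos = d
        elif self.pos is not None:
            self.pos = self.pos + self.vel   # coast on the last velocity
        return self.pos
```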
