
Video Analysis and Tracking Using State-of-the-Art Sensors in 2018-2019

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: closed (31 May 2019) | Viewed by 20874

Special Issue Editor


Prof. Dr. Joonki Paik
Guest Editor
Department of Image, Graduate School of Advanced Imaging Science, Chung-Ang University, Seoul 06974, Korea
Interests: image enhancement and restoration; computational imaging; intelligent surveillance systems

Special Issue Information

Dear Colleagues,

Object detection, identification, recognition, and tracking in video is a fundamental problem in the computer vision and image processing fields. The task requires object modelling and motion analysis, and various types of object models have been developed for improved performance. However, a practical object detection and tracking algorithm must cope with a number of limitations, including object occlusion, unstable illumination, object deformation, insufficient resolution of the input video, and limited computational resources relative to the required video processing speed, to name a few. Recent developments in state-of-the-art sensors widen the application area of video object tracking by addressing these practical limitations.

The objective of this Special Issue is to highlight innovative developments in video analysis and tracking technologies related to various state-of-the-art sensors. Topics include, but are not limited to:

  • Detection, identification, recognition, and tracking of objects using various sensors
  • Multiple-camera networks or camera association for very-wide-range surveillance
  • Development of non-visual sensors, such as time-of-flight sensors, RGB-D cameras, IR sensors, RADAR, LIDAR, motion sensors, and acoustic wave sensors, and their applications to video analysis and tracking
  • Image and video enhancement algorithms to improve the quality of visual sensors for video tracking
  • Computational photography and imaging for advanced object detection and tracking
  • Depth estimation and three-dimensional reconstruction for augmented reality (AR) and/or advanced driver assistance systems (ADAS)

Prof. Dr. Joonki Paik
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • video tracking
  • motion estimation
  • optical flow
  • RGB-D camera
  • infra-red (IR) sensor
  • RADAR
  • LIDAR
  • computational photography
  • augmented reality (AR)
  • surveillance

Published Papers (5 papers)


Research

28 pages, 26116 KiB  
Article
Monocular Robust Depth Estimation Vision System for Robotic Tasks Interventions in Metallic Targets
by Carlos Veiga Almagro, Mario Di Castro, Giacomo Lunghi, Raúl Marín Prades, Pedro José Sanz Valero, Manuel Ferre Pérez and Alessandro Masi
Sensors 2019, 19(14), 3220; https://doi.org/10.3390/s19143220 - 22 Jul 2019
Cited by 10 | Viewed by 4712
Abstract
Robotic interventions in hazardous scenarios need to pay special attention to safety, as in most cases it is necessary to have an expert operator in the loop. Moreover, the use of a multi-modal human-robot interface allows the user to interact with the robot using manual control in critical steps, as well as semi-autonomous behaviours in more secure scenarios, by using, for example, object tracking and recognition techniques. This paper describes a novel vision system to track and estimate the depth of metallic targets for robotic interventions. The system has been designed for on-hand monocular cameras, focusing on solving lack of visibility and partial occlusions. This solution has been validated during real interventions at the European Organization for Nuclear Research (CERN) accelerator facilities, achieving 95% success in autonomous mode and 100% in a supervised manner. The system increases the safety and efficiency of robotic operations, reducing the cognitive fatigue of the operator during non-critical mission phases. The integration of such an assistance system is especially important when facing complex (or repetitive) tasks, in order to reduce the workload and accumulated stress of the operator, enhancing the performance and safety of the mission.
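
The abstract does not spell out how depth is recovered from a single on-hand camera. One common monocular approach, shown here purely as an illustration, uses the pinhole relation between a target's known physical size and its apparent size in the image; the function name and the known-width assumption below are hypothetical, not taken from the paper.

    # Hypothetical sketch: depth from a monocular camera via the pinhole model,
    # assuming the target's physical width W is known in advance.
    # Z = f * W / w, where f is the focal length in pixels and w is the
    # target's width in the image (e.g., from a tracker's bounding box).

    def estimate_depth(focal_px: float, target_width_m: float, bbox_width_px: float) -> float:
        """Return the camera-to-target distance in metres."""
        if bbox_width_px <= 0:
            raise ValueError("bounding box width must be positive")
        return focal_px * target_width_m / bbox_width_px

    # Example: 800 px focal length, a 5 cm wide target seen as 40 px -> 1.0 m.
    print(estimate_depth(800.0, 0.05, 40.0))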

22 pages, 14352 KiB  
Article
STAM-CCF: Suspicious Tracking Across Multiple Camera Based on Correlation Filters
by Ruey-Kai Sheu, Mayuresh Pardeshi, Lun-Chi Chen and Shyan-Ming Yuan
Sensors 2019, 19(13), 3016; https://doi.org/10.3390/s19133016 - 9 Jul 2019
Cited by 6 | Viewed by 5973
Abstract
There is strong demand for real-time suspicious tracking across multiple cameras in intelligent video surveillance for public areas, such as universities, airports, and factories. Most criminal events show that suspicious behavior is carried out by unknown people who try to hide themselves as much as possible. Previous learning-based studies collected large data sets to train learning models to detect humans across multiple cameras, but failed to recognize newcomers. There are also several feature-based studies aimed at identifying humans in within-camera tracking. It would be very difficult for those methods to obtain the necessary feature information in multi-camera scenarios and scenes. The purpose of this study is to design and implement a suspicious-tracking mechanism across multiple cameras based on correlation filters, called suspicious tracking across multiple cameras based on correlation filters (STAM-CCF). By leveraging the geographical information of cameras and the YOLO object detection framework, STAM-CCF adjusts human identification and prevents errors caused by information loss in cases of object occlusion and overlapping during within-camera tracking. STAM-CCF also introduces a camera correlation model and a two-stage gait recognition strategy to deal with the problem of re-identification across multiple cameras. Experimental results show that the proposed method performs well with highly acceptable accuracy. The evidence also shows that the proposed STAM-CCF method can continuously recognize suspicious behavior within-camera and re-identify it successfully across multiple cameras.
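
STAM-CCF builds on correlation filters for within-camera tracking. As a point of reference, the sketch below implements a minimal single-template correlation filter in the spirit of MOSSE (Bolme et al., 2010): it learns a filter whose correlation response to the target is a Gaussian peak, then locates the target in a new patch by the peak of that response. This illustrates the generic technique, not the authors' implementation, and all names are illustrative.

    import numpy as np

    def gaussian_response(h, w, sigma=2.0):
        """Desired correlation output: a Gaussian peak at the patch centre."""
        ys, xs = np.mgrid[0:h, 0:w]
        return np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2 * sigma ** 2))

    def train_filter(patch, lam=1e-2):
        """Learn a correlation filter H* from one grayscale patch (MOSSE form)."""
        F = np.fft.fft2(patch)
        G = np.fft.fft2(gaussian_response(*patch.shape))
        return (G * np.conj(F)) / (F * np.conj(F) + lam)

    def locate(H_conj, patch):
        """Correlate a new patch with the filter and return the peak offset."""
        R = np.real(np.fft.ifft2(np.fft.fft2(patch) * H_conj))
        dy, dx = np.unravel_index(np.argmax(R), R.shape)
        h, w = patch.shape
        # Displacement relative to the centre (circular; valid for small motions).
        return dy - h // 2, dx - w // 2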

13 pages, 1009 KiB  
Article
Motion Estimation-Assisted Denoising for an Efficient Combination with an HEVC Encoder
by Seung-Yong Lee and Chae Eun Rhee
Sensors 2019, 19(4), 895; https://doi.org/10.3390/s19040895 - 21 Feb 2019
Cited by 3 | Viewed by 2697
Abstract
Noise, which is commonly generated in low-light environments or by low-performance cameras, is a major cause of the degradation of compression efficiency. In previous studies that attempted to combine a denoising algorithm with a video encoder, denoising was used independently of the codec, for pre-processing or post-processing. However, this process must be tightly coupled with encoding, because noise greatly affects compression efficiency. In addition, coupling represents a major opportunity to reduce computational complexity, because the encoding process and a denoising algorithm have many similarities. In this paper, a simple, add-on denoising scheme is proposed through a combination of the high-efficiency video coding (HEVC) and block-matching three-dimensional collaborative filtering (BM3D) algorithms. BM3D is known to have excellent denoising performance, but its use is limited by its high computational complexity. This paper employs motion estimation in HEVC to replace the block matching of BM3D, so that most of the time-consuming functions are shared. To overcome the challenging algorithmic differences, the hierarchical structure of HEVC is uniquely utilized. As a result, the computational complexity is drastically reduced, while competitive coding efficiency and denoising quality are maintained.
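
The key observation, that HEVC motion estimation and BM3D block matching both reduce to a windowed search for similar blocks, can be made concrete with a small sketch. The routine below is a generic illustration of that shared kernel under assumed names and parameters, not the paper's scheme: motion estimation keeps only the single best match (a motion vector), while BM3D-style grouping keeps the whole top-k set for joint filtering.

    import numpy as np

    def block_search(ref_block, frame, y0, x0, radius=8, k=8):
        """Return the k most similar blocks (SSD) within a +/- radius window."""
        b = ref_block.shape[0]
        scores = []
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                y, x = y0 + dy, x0 + dx
                if 0 <= y and y + b <= frame.shape[0] and 0 <= x and x + b <= frame.shape[1]:
                    cand = frame[y:y + b, x:x + b].astype(np.float64)
                    ssd = np.sum((cand - ref_block) ** 2)
                    scores.append((ssd, (dy, dx)))
        scores.sort(key=lambda s: s[0])
        return scores[:k]  # scores[0][1] is the motion vector; all k feed grouping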

20 pages, 4492 KiB  
Article
Content-Aware Focal Plane Selection and Proposals for Object Tracking on Plenoptic Image Sequences
by Dae Hyun Bae, Jae Woo Kim and Jae-Pil Heo
Sensors 2019, 19(1), 48; https://doi.org/10.3390/s19010048 - 22 Dec 2018
Cited by 4 | Viewed by 3520
Abstract
Object tracking is a fundamental problem in computer vision, since it is required in many practical applications, including video-based surveillance and autonomous vehicles. One of the most challenging scenarios in the problem is when the target object is partially or even fully occluded by other objects. In such cases, most existing trackers fail while the object is invisible. Recently, a few techniques have been proposed to tackle the occlusion problem by performing the tracking on plenoptic image sequences. Although they have shown promising results based on the refocusing capability of plenoptic images, there is still room for improvement. In this paper, we propose a novel focus index selection algorithm to identify the optimal focal plane on which the tracking should be performed. To determine an optimal focus index, we use a focus measure to find the maximally focused plane and a visual similarity measure to capture the plane where the target object is visible and its appearance is distinguishably clear. We further use the selected focus index to generate proposals. Since the optimal focus index allows us to estimate the distance between the camera and the target object, we can more accurately predict the scale changes of the object in the image plane. Our proposal algorithm also takes the trajectory of the target object into account. We extensively evaluate the proposed techniques on three plenoptic image sequences by comparing them against prior tracking methods specialized for plenoptic image sequences. In the experiments, our method provides higher accuracy and robustness than the prior art, and those results confirm the merits of the proposed algorithms.
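
The abstract names a focus measure without specifying it. A widely used sharpness measure is the variance of the Laplacian, which the illustrative sketch below uses to pick the focal plane on which the target region is sharpest; the visual-similarity term the paper combines with it is omitted, and all names are hypothetical.

    import numpy as np

    LAPLACIAN = np.array([[0, 1, 0],
                          [1, -4, 1],
                          [0, 1, 0]], dtype=float)

    def focus_measure(patch):
        """Variance of the Laplacian over a grayscale patch (higher = sharper)."""
        h, w = patch.shape
        out = np.zeros((h - 2, w - 2))
        for dy in range(3):
            for dx in range(3):
                out += LAPLACIAN[dy, dx] * patch[dy:dy + h - 2, dx:dx + w - 2]
        return out.var()

    def select_focus_index(focal_stack, bbox):
        """Pick the focal plane whose target region is sharpest."""
        y, x, h, w = bbox
        return max(range(len(focal_stack)),
                   key=lambda i: focus_measure(focal_stack[i][y:y + h, x:x + w]))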

20 pages, 5998 KiB  
Article
NCA-Net for Tracking Multiple Objects across Multiple Cameras
by Yihua Tan, Yuan Tai and Shengzhou Xiong
Sensors 2018, 18(10), 3400; https://doi.org/10.3390/s18103400 - 11 Oct 2018
Cited by 4 | Viewed by 3215
Abstract
Tracking multiple pedestrians across multi-camera scenarios is an important part of intelligent video surveillance and has great potential in public security applications, which has made it an attractive topic in the literature in recent years. In most previous methods, hand-crafted features such as color histograms, HOG descriptors, and Haar-like features were adopted to associate objects among different cameras. However, there are still many challenges caused by low resolution, variation of illumination, complex backgrounds, and posture changes. In this paper, a feature extraction network named NCA-Net is designed to improve the performance of multiple-object tracking across multiple cameras by avoiding the insufficient robustness of hand-crafted features. The network combines feature learning and metric learning via a convolutional neural network (CNN) model and a loss function similar to neighborhood components analysis (NCA). The loss function is adapted from the probability loss of NCA, aiming at object tracking. Experiments conducted on the NLPR_MCT dataset show that we obtain satisfactory results even with a simple matching operation. In addition, we embed the proposed NCA-Net in two existing tracking systems. The experimental results on the corresponding datasets demonstrate that the features extracted using NCA-Net can effectively improve tracking performance.
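
The NCA probability loss that NCA-Net adapts has a compact form: each sample selects a neighbour with probability given by a softmax over negative squared embedding distances, and the loss rewards probability mass landing on same-identity pairs. Below is a minimal numpy sketch of that loss, as an illustration rather than the authors' training code.

    import numpy as np

    def nca_loss(embeddings, labels):
        """Negative expected same-class neighbour probability (NCA objective)."""
        n = len(embeddings)
        # Pairwise squared distances between embeddings, shape (n, n).
        d2 = np.sum((embeddings[:, None, :] - embeddings[None, :, :]) ** 2, axis=-1)
        p = np.exp(-d2)
        np.fill_diagonal(p, 0.0)           # a point never selects itself
        p /= p.sum(axis=1, keepdims=True)  # softmax over candidate neighbours
        same = labels[:, None] == labels[None, :]
        return -np.sum(p[same]) / n        # minimising this maximises same-ID mass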