Aerial Vision and Sensors

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: closed (15 November 2020) | Viewed by 17550

Special Issue Editor


Dr. Andreas Savakis
Guest Editor
Rochester Institute of Technology, Rochester, NY, USA
Interests: computer vision; deep learning; adaptive and robust learning

Special Issue Information

Dear Colleagues,

The importance of aerial imagery is steadily increasing due to the proliferation of sensors on Unmanned Aerial Vehicles (UAVs), overhead surveillance cameras, and commercial satellite systems. The large volume of aerial data necessitates the development of computer vision methods for intelligent processing and robust analysis. Advances in aerial vision benefit many applications, including surveillance and security, infrastructure monitoring, precision agriculture, disaster management, and traffic monitoring. Aerial sensors can generate data in various modalities, including RGB, panchromatic, thermal, multispectral, hyperspectral, SAR, and LiDAR. While datasets of ground-level images, e.g., ImageNet, have been widely utilized for the development of computer vision techniques, much less attention has been devoted to vision for aerial images and other data modalities. This presents an opportunity for new contributions to this dynamic field. As Guest Editor of this Special Issue, it is my pleasure to invite high-quality manuscripts on computer vision and image analysis of aerial data. Contributions of novel algorithms, systems, and architectures, as well as review articles, benchmarking studies, and new datasets for aerial vision, are welcome.

Dr. Andreas Savakis
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Computer Vision for Aerial Data
  • Object Recognition
  • Object Tracking
  • Activity Recognition
  • Scene Analysis
  • Image Segmentation
  • 3D Reconstruction
  • Change Detection
  • Event Cameras
  • Computational Imaging
  • Efficient Algorithms and Architectures for Aerial Platforms

Published Papers (4 papers)


Research


16 pages, 18428 KiB  
Article
The UJI Aerial Librarian Robot: A Quadcopter for Visual Library Inventory and Book Localisation
by Ester Martinez-Martin, Eric Ferrer, Ilia Vasilev and Angel P. del Pobil
Sensors 2021, 21(4), 1079; https://doi.org/10.3390/s21041079 - 4 Feb 2021
Cited by 9 | Viewed by 4901
Abstract
Over time, the field of robotics has provided solutions to automate routine tasks in different scenarios. In particular, libraries are attracting great interest in task automation, since they are semi-structured environments where machines coexist with humans and several repetitive operations could be performed automatically. In addition, multirotor aerial vehicles have become very popular in many applications over the past decade; however, autonomous flight in confined spaces still presents a number of challenges, and the use of small drones as automated inventory devices within libraries has not been reported. This paper presents the UJI aerial librarian robot, which leverages computer vision techniques to autonomously self-localize and navigate in a library for automated inventory and book localization. A control strategy to navigate along the library bookcases is presented, using visual markers for self-localization during visual inspection of the bookshelves. An image-based book recognition technique is described that combines computer vision techniques to detect the tags on book spines, followed by optical character recognition (OCR) to convert the book codes on the tags into text. These data can be used for library inventory: misplaced books can be detected automatically, and a particular book can be located within the library. Our quadrotor robot was tested in a real library with promising results. The problems encountered and the limitations of the system are discussed, along with its relation to similar applications, such as automated inventory in warehouses.
(This article belongs to the Special Issue Aerial Vision and Sensors)
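The tag-reading pipeline described in this abstract (fiducial-marker localization plus OCR on spine tags) can be sketched with off-the-shelf tools. The snippet below is a minimal illustration, not the authors' implementation: the ArUco dictionary, the Otsu thresholding step, and the input file name are assumptions, and it requires opencv-contrib-python (>= 4.7) and pytesseract.

```python
import cv2
import pytesseract

# Fiducial-marker detection for self-localization. The paper only says
# "visual markers"; ArUco is an assumed, commonly used choice.
aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(aruco_dict, cv2.aruco.DetectorParameters())

frame = cv2.imread("bookshelf.jpg")  # hypothetical onboard camera frame
corners, ids, _ = detector.detectMarkers(frame)
print("markers seen:", None if ids is None else ids.ravel().tolist())

# OCR on a binarized image to read spine-tag codes; in practice the
# detected tag region would be cropped before recognition.
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
text = pytesseract.image_to_string(binary, config="--psm 7")  # single text line
print("tag text:", text.strip())
```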

20 pages, 7692 KiB  
Article
An Improved FBPN-Based Detection Network for Vehicles in Aerial Images
by Bin Wang and Yinjuan Gu
Sensors 2020, 20(17), 4709; https://doi.org/10.3390/s20174709 - 20 Aug 2020
Cited by 12 | Viewed by 2764
Abstract
With the development of artificial intelligence and big data analytics, an increasing number of researchers have applied deep learning to train neural networks, with great success in the field of vehicle detection. However, as a special domain of object detection, vehicle detection in aerial images has made only limited progress because of low resolution, complex backgrounds, and rotating objects. In this paper, an improved feature-balanced pyramid network (FBPN) is proposed to enhance the network's ability to detect small objects. By combining FBPN with a modified Faster R-CNN (faster region-based convolutional neural network), a vehicle detection framework for aerial images is obtained. The focal loss function is adopted in the proposed framework to reduce the imbalance between easy and hard samples. Experimental results on the VEDAI, UCAS-AOD, and DOTA datasets show that the proposed framework outperforms other state-of-the-art vehicle detection algorithms for aerial images.
(This article belongs to the Special Issue Aerial Vision and Sensors)
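The focal loss mentioned here is the standard remedy for easy/hard sample imbalance in dense detection. Below is a minimal PyTorch sketch of the binary form, FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t), from Lin et al. (2017); the alpha and gamma values are the common defaults, since the abstract does not state which values this paper uses.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Binary focal loss: down-weights well-classified (easy) examples so
    gradients concentrate on hard ones; alpha balances pos/neg classes."""
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)            # prob. of true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1.0 - p_t) ** gamma * ce).mean()

# Toy usage: two easy negatives, one hard positive, one easy positive anchor.
logits = torch.tensor([-4.0, -3.5, -0.2, 2.5])
labels = torch.tensor([0.0, 0.0, 1.0, 1.0])
print(focal_loss(logits, labels))
```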

23 pages, 3116 KiB  
Article
Robust Vision-Based Control of a Rotorcraft UAV for Uncooperative Target Tracking
by Shijie Zhang, Xiangtian Zhao and Botian Zhou
Sensors 2020, 20(12), 3474; https://doi.org/10.3390/s20123474 - 19 Jun 2020
Cited by 8 | Viewed by 3417
Abstract
This paper investigates the problem of using an unmanned aerial vehicle (UAV) to track and hover above an uncooperative target, such as an unvisited area or a newly discovered object. A vision-based strategy integrating metrology and control is employed to achieve target tracking and hovering observation. First, by introducing a virtual camera frame, the reprojected image features can change independently of the rotational motion of the vehicle. The image centroid and an optimal observation area on the virtual image plane are exploited to regulate the relative horizontal and vertical distances. Then, optic flow and gyro measurements are utilized to estimate the relative UAV-to-target velocity. Further, a gain-switching proportional-derivative (PD) control scheme is proposed to compensate for external interference and model uncertainties. The closed-loop system is proven to be exponentially stable based on the Lyapunov method. Finally, simulation results demonstrate the effectiveness of the proposed vision-based strategy in both hovering and tracking scenarios.
(This article belongs to the Special Issue Aerial Vision and Sensors)
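Gain switching in a PD law typically means using one gain set far from the target and another near it. The sketch below illustrates that idea on a single translational axis with double-integrator dynamics; the switching threshold, all gain values, and the dynamics model are illustrative assumptions, not the paper's actual scheme or its Lyapunov analysis.

```python
def gain_switching_pd(e, de, threshold=0.5, kp=(2.0, 1.0), kd=(0.8, 1.5)):
    """One axis of a gain-switching PD law: u = -kp*e - kd*de.
    Aggressive gains while the error is large, heavier damping near the
    target to suppress overshoot under disturbances (illustrative values)."""
    i = 0 if abs(e) > threshold else 1
    return -kp[i] * e - kd[i] * de

# Toy rollout: double-integrator stand-in for one UAV translational axis.
e, de, dt = 2.0, 0.0, 0.02           # initial error [m], error rate, step [s]
for _ in range(1000):                # 20 s of simulated flight
    u = gain_switching_pd(e, de)
    de += u * dt
    e += de * dt
print(f"error after 20 s: {e:.4f}")  # decays toward zero
```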

Review


25 pages, 6293 KiB  
Review
Benchmarking Deep Trackers on Aerial Videos
by Abu Md Niamul Taufique, Breton Minnehan and Andreas Savakis
Sensors 2020, 20(2), 547; https://doi.org/10.3390/s20020547 - 19 Jan 2020
Cited by 10 | Viewed by 5569
Abstract
In recent years, deep learning-based visual object trackers have achieved state-of-the-art performance on several visual object tracking benchmarks. However, most tracking benchmarks focus on ground-level videos, whereas aerial tracking presents a new set of challenges. In this paper, we compare ten deep learning-based trackers on four aerial datasets. We choose top-performing trackers utilizing different approaches, specifically tracking by detection, discriminative correlation filters, Siamese networks, and reinforcement learning. In our experiments, we use a subset of the OTB2015 dataset with aerial-style videos; the UAV123 dataset without synthetic sequences; the UAV20L dataset, which contains 20 long sequences; and the DTB70 dataset as our benchmark datasets. We compare the advantages and disadvantages of different trackers in the different tracking situations encountered in aerial data. Our findings indicate that the trackers perform significantly worse on aerial datasets than on standard ground-level videos. We attribute this effect to smaller target size, camera motion, significant camera rotation with respect to the target, out-of-view movement, and clutter in the form of occlusions or similar-looking distractors near the tracked object.
(This article belongs to the Special Issue Aerial Vision and Sensors)
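Benchmarks of this kind typically rank trackers by the area under the OTB-style success curve, i.e., the fraction of frames whose predicted-to-ground-truth overlap exceeds a threshold, averaged over thresholds. A minimal sketch follows; the (x, y, w, h) box format and the 21-point threshold grid are the common OTB conventions, not details taken from this paper.

```python
import numpy as np

def iou(a, b):
    """Intersection over union of two boxes in (x, y, w, h) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2 = min(a[0] + a[2], b[0] + b[2])
    y2 = min(a[1] + a[3], b[1] + b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def success_auc(pred_boxes, gt_boxes):
    """Area under the success curve: fraction of frames with overlap above
    each threshold in [0, 1], averaged over a uniform 21-point grid."""
    overlaps = np.array([iou(p, g) for p, g in zip(pred_boxes, gt_boxes)])
    return float(np.mean([(overlaps > t).mean() for t in np.linspace(0, 1, 21)]))

# Toy check: a perfect tracker scores near 1.0 (only the strict t = 1.0
# threshold fails for exactly overlapping boxes).
gt = [(10, 10, 50, 80), (12, 11, 48, 78)]
print(success_auc(gt, gt))  # -> 0.952...
```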
