Special Issue "Recent Trends in UAV Remote Sensing"

A special issue of Remote Sensing (ISSN 2072-4292).

Deadline for manuscript submissions: closed (31 December 2016)

Special Issue Editors

Guest Editor
Prof. Farid Melgani

Department of Information Engineering and Computer Science, University of Trento, I-38123 Trento, Italy
Interests: UAV and multi/hyperspectral remote sensing; image processing and analysis; change detection; machine learning; pattern recognition; computer vision
Guest Editor
Dr. Francesco Nex

Department of Earth Observation Science (EOS), ITC Faculty, University of Twente, PO Box 217, 7500 AE Enschede, The Netherlands
Interests: geometric and radiometric sensors; sensor fusion; calibration of imagery; signal/image processing; mission planning; navigation and position/orientation; machine learning; simultaneous localization and mapping; regulations and economic impact; agriculture; geosciences; urban areas; architecture; monitoring/change detection; education

Special Issue Information

Dear Colleagues,

Unmanned Aerial Vehicles (UAVs) are now involved in a wide range of remote sensing applications. UAVs are rapid, efficient and flexible acquisition systems, and they represent a valid alternative, or a complementary solution, to satellite or airborne sensors, especially for extremely high resolution acquisitions over small or inaccessible areas. Thanks to their timely, inexpensive and extremely rich data acquisition compared with other acquisition systems, UAVs are emerging as innovative and cost-effective platforms for numerous urban and environmental survey tasks.

This Special Issue aims to collect new developments and methodologies, best practices and applications of UAVs for remote sensing. We welcome submissions that provide the community with the most recent advances in all aspects of UAV remote sensing, including, but not limited to:

  • Data processing and photogrammetry
  • Data analysis (image classification, feature extraction, target detection, change detection, biophysical parameter estimation, etc.)
  • Platforms and new sensors on board (multispectral, hyperspectral, thermal, lidar, SAR, gas or radioactivity sensors, etc.)
  • Data fusion: integration of UAV imagery with satellite, aerial or terrestrial data, integration of heterogeneous data captured by UAVs
  • Real-time processing; collaborative UAVs and UAV fleets applied to remote sensing
  • Onboard data storage and transmission
  • Applications (3D mapping, urban monitoring, precision farming, forestry, disaster prevention, assessment and monitoring, search and rescue, security, archaeology, industrial plant inspection, etc.)
  • Review of national and international regulations
  • Any use of UAVs related to remote sensing

Farid Melgani
Francesco Nex
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website and then proceeding to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (24 papers)


Research


Open Access Article A Low Cost UWB Based Solution for Direct Georeferencing UAV Photogrammetry
Remote Sens. 2017, 9(5), 414; doi:10.3390/rs9050414
Received: 1 January 2017 / Revised: 20 April 2017 / Accepted: 21 April 2017 / Published: 27 April 2017
Abstract
Thanks to their flexibility and availability at reduced costs, Unmanned Aerial Vehicles (UAVs) have been recently used on a wide range of applications and conditions. Among these, they can play an important role in monitoring critical events (e.g., disaster monitoring) when the presence of humans close to the scene shall be avoided for safety reasons, in precision farming and surveying. Despite the very large number of possible applications, their usage is mainly limited by the availability of the Global Navigation Satellite System (GNSS) in the considered environment: indeed, GNSS is of fundamental importance in order to reduce positioning error derived by the drift of (low-cost) Micro-Electro-Mechanical Systems (MEMS) internal sensors. In order to make the usage of UAVs possible even in critical environments (when GNSS is not available or not reliable, e.g., close to mountains or in city centers, close to high buildings), this paper considers the use of a low cost Ultra Wide-Band (UWB) system as the positioning method. Furthermore, assuming the use of a calibrated camera, UWB positioning is exploited to achieve metric reconstruction on a local coordinate system. Once the georeferenced position of at least three points (e.g., positions of three UWB devices) is known, then georeferencing can be obtained, as well. The proposed approach is validated on a specific case study, the reconstruction of the façade of a university building. Average error on 90 check points distributed over the building façade, obtained by georeferencing by means of the georeferenced positions of four UWB devices at fixed positions, is 0.29 m. For comparison, the average error obtained by using four ground control points is 0.18 m. Full article
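The georeferencing step described above, recovering a metric, georeferenced model once the georeferenced positions of at least three points (e.g., three UWB devices) are known, corresponds to estimating a 3D similarity transform (scale, rotation, translation). As an illustrative sketch under that assumption, and not code from the paper, a least-squares Umeyama-style estimate in NumPy:

```python
import numpy as np

def similarity_transform(local, world):
    """Least-squares 3D similarity transform (Umeyama-style):
    find scale s, rotation R and translation t so that
    world is approximately s * R @ local + t, from >= 3
    corresponding points given as (n, 3) arrays."""
    local = np.asarray(local, float)
    world = np.asarray(world, float)
    mu_l, mu_w = local.mean(axis=0), world.mean(axis=0)
    L, W = local - mu_l, world - mu_w
    # Cross-covariance between the two centred point sets
    sigma = W.T @ L / len(local)
    U, S, Vt = np.linalg.svd(sigma)
    # Guard against a reflection in the recovered rotation
    d = np.sign(np.linalg.det(U @ Vt))
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) * len(local) / (L ** 2).sum()
    t = mu_w - s * R @ mu_l
    return s, R, t
```

With the georeferenced positions of, say, four UWB anchors as `world` and their coordinates in the local reconstruction as `local`, the whole model can then be mapped point-wise as `s * (R @ X.T).T + t`.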
(This article belongs to the Special Issue Recent Trends in UAV Remote Sensing)

Open Access Article Combining Unmanned Aerial Systems and Sensor Networks for Earth Observation
Remote Sens. 2017, 9(4), 336; doi:10.3390/rs9040336
Received: 30 December 2016 / Revised: 22 March 2017 / Accepted: 27 March 2017 / Published: 1 April 2017
Abstract
The combination of remote sensing and sensor network technologies can provide unprecedented earth observation capabilities, and has attracted strong R&D interest in recent years. However, the procedures and tools used for the deployment, georeferencing and collection of logged measurements at traditional environmental monitoring stations are not suitable when dealing with hundreds or thousands of sensor nodes deployed over tens of hectares. This paper presents a scheme based on Unmanned Aerial Systems that takes a step forward in the use of sensor networks for environmental observation. The presented scheme includes methods, tools and technologies to solve sensor node deployment, localization and collection of measurements. It is scalable (suitable for medium to large environments with a high number of sensor nodes) and highly autonomous (operated with very low human intervention). This paper presents the scheme, including its main components, techniques and technologies, and describes its implementation and evaluation in field experiments. Full article
(This article belongs to the Special Issue Recent Trends in UAV Remote Sensing)

Open Access Article Deep Learning Approach for Car Detection in UAV Imagery
Remote Sens. 2017, 9(4), 312; doi:10.3390/rs9040312
Received: 31 December 2016 / Revised: 12 March 2017 / Accepted: 24 March 2017 / Published: 27 March 2017
Abstract
This paper presents an automatic solution to the problem of detecting and counting cars in unmanned aerial vehicle (UAV) images. This is a challenging task given the very high spatial resolution of UAV images (on the order of a few centimetres) and the extremely high level of detail, which require suitable automatic analysis methods. Our proposed method begins by segmenting the input image into small homogeneous regions, which can be used as candidate locations for car detection. Next, a window is extracted around each region, and deep learning is used to mine highly descriptive features from these windows. We use a deep convolutional neural network (CNN) system that is already pre-trained on huge auxiliary data as a feature extraction tool, combined with a linear support vector machine (SVM) classifier to classify regions into “car” and “no-car” classes. The final step is devoted to a fine-tuning procedure which performs morphological dilation to smooth the detected regions and fill any holes. In addition, small isolated regions are analysed further using a few sliding rectangular windows to locate cars more accurately and remove false positives. To evaluate our method, experiments were conducted on a challenging set of real UAV images acquired over an urban area. The experimental results have proven that the proposed method outperforms the state-of-the-art methods, both in terms of accuracy and computational time. Full article
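The fine-tuning step in the abstract above uses morphological dilation to smooth the detected regions. As a hedged illustration (the paper's actual implementation is not reproduced here), a minimal pure-NumPy binary dilation with a 3x3 structuring element:

```python
import numpy as np

def dilate(mask, iterations=1):
    """Binary dilation with a 3x3 structuring element, pure NumPy:
    a pixel becomes True if it, or any of its 8 neighbours, is True."""
    out = np.asarray(mask, bool)
    h, w = out.shape
    for _ in range(iterations):
        padded = np.pad(out, 1)          # zero-pad the border
        acc = np.zeros_like(out)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                # OR together all shifted copies of the mask
                acc |= padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
        out = acc
    return out
```

Applied to a binary "car" mask, each iteration grows detected regions by one pixel in every direction, closing small gaps between fragments of the same vehicle.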
(This article belongs to the Special Issue Recent Trends in UAV Remote Sensing)

Open Access Article Detection of Flavescence dorée Grapevine Disease Using Unmanned Aerial Vehicle (UAV) Multispectral Imagery
Remote Sens. 2017, 9(4), 308; doi:10.3390/rs9040308
Received: 30 December 2016 / Revised: 13 March 2017 / Accepted: 15 March 2017 / Published: 24 March 2017
Abstract
Flavescence dorée is a grapevine disease affecting European vineyards with severe economic consequences, and containing its spread is therefore considered a major challenge for viticulture. Flavescence dorée is subject to mandatory pest control, including removal of infected vines, and in this context the automatic detection of symptomatic vines by unmanned aerial vehicle (UAV) remote sensing could constitute a key diagnostic instrument for growers. The objective of this paper is to evaluate the feasibility of discriminating Flavescence dorée symptoms in red and white cultivars from healthy vine vegetation using UAV multispectral imagery. Exhaustive ground truth data and UAV multispectral imagery (visible and near-infrared domain) were acquired in September 2015 over four selected vineyards in Southwest France. Spectral signatures of healthy and symptomatic plants were studied with a set of 20 variables computed from the UAV images (spectral bands, vegetation indices and biophysical parameters) using univariate and multivariate classification approaches. The best results were achieved with red cultivars, using both univariate and multivariate approaches. For white cultivars, results were not satisfactory for either approach. Nevertheless, the external accuracy assessment shows that, despite misclassification between Flavescence dorée and healthy pixels, an operational Flavescence dorée mapping technique using UAV-based imagery can still be proposed. Full article
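Among the 20 variables mentioned above are vegetation indices computed from the visible and near-infrared bands. As one representative example (the paper's exact index set is not reproduced here), the classic NDVI:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Per-pixel Normalized Difference Vegetation Index,
    (NIR - Red) / (NIR + Red); eps avoids division by zero
    on completely dark pixels."""
    nir = np.asarray(nir, float)
    red = np.asarray(red, float)
    return (nir - red) / (nir + red + eps)
```

Healthy, vigorous vegetation pushes NDVI towards 1, while discoloured symptomatic foliage and bare soil give lower values, which is what makes such indices candidate discriminating variables.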
(This article belongs to the Special Issue Recent Trends in UAV Remote Sensing)

Open Access Article Automatic Object-Oriented, Spectral-Spatial Feature Extraction Driven by Tobler’s First Law of Geography for Very High Resolution Aerial Imagery Classification
Remote Sens. 2017, 9(3), 285; doi:10.3390/rs9030285
Received: 1 November 2016 / Revised: 18 February 2017 / Accepted: 12 March 2017 / Published: 17 March 2017
Abstract
Aerial image classification has become popular and has attracted extensive research efforts in recent decades. The main challenge lies in its very high spatial resolution but relatively insufficient spectral information. To this end, spatial-spectral feature extraction is a popular strategy for classification. However, parameter determination for that feature extraction is usually time-consuming and depends excessively on experience. In this paper, an automatic spatial feature extraction approach based on image raster and segmental vector data cross-analysis is proposed for the classification of very high spatial resolution (VHSR) aerial imagery. First, multi-resolution segmentation is used to generate strongly homogeneous image objects and extract corresponding vectors. Then, to automatically explore the region of a ground target, two rules, which are derived from Tobler’s First Law of Geography (TFL) and a topological relationship of vector data, are integrated to constrain the extension of a region around a central object. Third, the shape and size of the extended region are described. A final classification map is achieved through a supervised classifier using shape, size, and spectral features. Experiments on three real aerial images of VHSR (0.1 to 0.32 m) are done to evaluate effectiveness and robustness of the proposed approach. Comparisons to state-of-the-art methods demonstrate the superiority of the proposed method in VHSR image classification. Full article
(This article belongs to the Special Issue Recent Trends in UAV Remote Sensing)

Open Access Article UAV-Based Oblique Photogrammetry for Outdoor Data Acquisition and Offsite Visual Inspection of Transmission Line
Remote Sens. 2017, 9(3), 278; doi:10.3390/rs9030278
Received: 12 September 2016 / Revised: 7 March 2017 / Accepted: 14 March 2017 / Published: 16 March 2017
Abstract
Regular inspection of transmission lines is an essential task that has so far been implemented by either labor-intensive or very expensive approaches. 3D reconstruction could be an alternative solution that satisfies the need for accurate, low-cost inspection. This paper exploits the use of an unmanned aerial vehicle (UAV) for outdoor data acquisition and conducts accuracy assessment tests to explore its potential for the offsite inspection of transmission lines. Firstly, an oblique photogrammetric system, integrating a low-cost dual-camera imaging system, an onboard dual-frequency GNSS (Global Navigation Satellite System) receiver and a ground master GNSS station at a fixed position, is designed to acquire images with ground resolutions better than 3 cm. Secondly, an image orientation method that accounts for the oblique imaging geometry of the dual-camera system is applied to detect enough tie-points to construct stable image connections in both the along-track and across-track directions. To achieve the best geo-referencing accuracy and evaluate model measurement precision, signalized ground control points (GCPs) and model key points were surveyed. Finally, accuracy assessment tests, covering absolute orientation precision and relative model precision, were conducted with different GCP configurations. Experiments show that images captured by the designed photogrammetric system contain enough information about power pylons from different viewpoints. Quantitative assessment demonstrates that, even with few GCPs for image orientation, the absolute and relative accuracies of image orientation and model measurement are better than 0.3 and 0.2 m, respectively. For the regular inspection of transmission lines, the proposed solution can to some extent be an alternative method with competitive accuracy, lower operational complexity and considerable gains in economic cost. Full article
(This article belongs to the Special Issue Recent Trends in UAV Remote Sensing)

Open Access Article Detection and Segmentation of Vine Canopy in Ultra-High Spatial Resolution RGB Imagery Obtained from Unmanned Aerial Vehicle (UAV): A Case Study in a Commercial Vineyard
Remote Sens. 2017, 9(3), 268; doi:10.3390/rs9030268
Received: 31 December 2016 / Revised: 2 March 2017 / Accepted: 12 March 2017 / Published: 15 March 2017
Abstract
The use of Unmanned Aerial Vehicles (UAVs) in viticulture permits the capture of aerial Red-Green-Blue (RGB) images with an ultra-high spatial resolution. Recent studies have demonstrated that RGB images can be used to monitor spatial variability of vine biophysical parameters. However, for estimating these parameters, accurate and automated segmentation methods are required to extract relevant information from RGB images. Manual segmentation of aerial images is a laborious and time-consuming process. Traditional classification methods have shown satisfactory results in the segmentation of RGB images for diverse applications and surfaces, however, in the case of commercial vineyards, it is necessary to consider some particularities inherent to canopy size in the vertical trellis systems (VSP) such as shadow effect and different soil conditions in inter-rows (mixed information of soil and weeds). Therefore, the objective of this study was to compare the performance of four classification methods (K-means, Artificial Neural Networks (ANN), Random Forest (RForest) and Spectral Indices (SI)) to detect canopy in a vineyard trained on VSP. Six flights were carried out from post-flowering to harvest in a commercial vineyard cv. Carménère using a low-cost UAV equipped with a conventional RGB camera. The results show that the ANN and the simple SI method complemented with the Otsu method for thresholding presented the best performance for the detection of the vine canopy with high overall accuracy values for all study days. Spectral indices presented the best performance in the detection of Plant class (Vine canopy) with an overall accuracy of around 0.99. However, considering the performance pixel by pixel, the Spectral indices are not able to discriminate between Soil and Shadow class. 
The best performance in the classification of three classes (Plant, Soil, and Shadow) of vineyard RGB images, was obtained when the SI values were used as input data in trained methods (ANN and RForest), reaching overall accuracy values around 0.98 with high sensitivity values for the three classes. Full article
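The abstract above pairs spectral indices (SI) with the Otsu method for thresholding. A minimal sketch of Otsu's threshold selection, which picks the cut that maximizes between-class variance over a histogram, assuming a flat array of per-pixel index values:

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Otsu's method: return the histogram bin centre that maximizes
    the between-class variance of the resulting two-class split."""
    hist, edges = np.histogram(np.ravel(values), bins=bins)
    p = hist / hist.sum()
    centres = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)            # probability of class 0 (below threshold)
    mu = np.cumsum(p * centres)  # cumulative mean
    mu_t = mu[-1]                # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    sigma_b[~np.isfinite(sigma_b)] = 0.0  # empty-class splits score zero
    return centres[np.argmax(sigma_b)]
```

Pixels with index values above the returned threshold would then be labelled as canopy, the rest as background, which is the kind of SI + Otsu pipeline the study evaluates.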
(This article belongs to the Special Issue Recent Trends in UAV Remote Sensing)

Open Access Article Combining Spectral Data and a DSM from UAS-Images for Improved Classification of Non-Submerged Aquatic Vegetation
Remote Sens. 2017, 9(3), 247; doi:10.3390/rs9030247
Received: 30 November 2016 / Revised: 17 February 2017 / Accepted: 1 March 2017 / Published: 7 March 2017
Abstract
Monitoring of aquatic vegetation is an important component in the assessment of freshwater ecosystems. Remote sensing with unmanned aircraft systems (UASs) can provide sub-decimetre-resolution aerial images and is a useful tool for detailed vegetation mapping. In a previous study, non-submerged aquatic vegetation was successfully mapped using automated classification of spectral and textural features from a true-colour UAS-orthoimage with 5-cm pixels. In the present study, height data from a digital surface model (DSM) created from overlapping UAS-images has been incorporated together with the spectral and textural features from the UAS-orthoimage to test if classification accuracy can be improved further. We studied two levels of thematic detail: (a) Growth forms including the classes of water, nymphaeid, and helophyte; and (b) dominant taxa including seven vegetation classes. We hypothesized that the incorporation of height data together with spectral and textural features would increase classification accuracy as compared to using spectral and textural features alone, at both levels of thematic detail. We tested our hypothesis at five test sites (100 m × 100 m each) with varying vegetation complexity and image quality using automated object-based image analysis in combination with Random Forest classification. Overall accuracy at each of the five test sites ranged from 78% to 87% at the growth-form level and from 66% to 85% at the dominant-taxon level. In comparison to using spectral and textural features alone, the inclusion of height data increased the overall accuracy significantly by 4%–21% for growth-forms and 3%–30% for dominant taxa. The biggest improvement gained by adding height data was observed at the test site with the most complex vegetation. 
Height data derived from UAS-images has a large potential to efficiently increase the accuracy of automated classification of non-submerged aquatic vegetation, indicating good possibilities for operative mapping. Full article
(This article belongs to the Special Issue Recent Trends in UAV Remote Sensing)

Open Access Article Individual Tree Detection and Classification with UAV-Based Photogrammetric Point Clouds and Hyperspectral Imaging
Remote Sens. 2017, 9(3), 185; doi:10.3390/rs9030185
Received: 8 December 2016 / Revised: 16 February 2017 / Accepted: 18 February 2017 / Published: 23 February 2017
Abstract
Small unmanned aerial vehicle (UAV) based remote sensing is a rapidly evolving technology. Novel sensors and methods are entering the market, offering completely new possibilities to carry out remote sensing tasks. Three-dimensional (3D) hyperspectral remote sensing is a novel and powerful technology that has recently become available to small UAVs. This study investigated the performance of UAV-based photogrammetry and hyperspectral imaging in individual tree detection and tree species classification in boreal forests. Eleven test sites with 4151 reference trees representing various tree species and developmental stages were collected in June 2014 using a UAV remote sensing system equipped with a frame format hyperspectral camera and an RGB camera in highly variable weather conditions. Dense point clouds were measured photogrammetrically by automatic image matching using high resolution RGB images with a 5 cm point interval. Spectral features were obtained from the hyperspectral image blocks, the large radiometric variation of which was compensated for by using a novel approach based on radiometric block adjustment with the support of in-flight irradiance observations. Spectral and 3D point cloud features were used in the classification experiment with various classifiers. The best results were obtained with Random Forest and Multilayer Perceptron (MLP) which both gave 95% overall accuracies and an F-score of 0.93. Accuracy of individual tree identification from the photogrammetric point clouds varied between 40% and 95%, depending on the characteristics of the area. Challenges in reference measurements might also have reduced these numbers. Results were promising, indicating that hyperspectral 3D remote sensing was operational from a UAV platform even in very difficult conditions. These novel methods are expected to provide a powerful tool for automating various environmental close-range remote sensing tasks in the very near future. Full article
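The reported 95% overall accuracy and F-score of 0.93 combine precision and recall in the standard way. For reference, the F1-score from raw detection counts (the counts in the usage note below are made up for illustration, not taken from the paper):

```python
def f_score(tp, fp, fn):
    """F1-score (harmonic mean of precision and recall) from counts
    of true positives, false positives and false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

For example, with 93 true positives and 7 false positives and 7 false negatives, precision = recall = 0.93 and the F1-score is 0.93.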
(This article belongs to the Special Issue Recent Trends in UAV Remote Sensing)

Open Access Article Accuracy Assessment of Digital Surface Models from Unmanned Aerial Vehicles’ Imagery on Glaciers
Remote Sens. 2017, 9(2), 186; doi:10.3390/rs9020186
Received: 30 November 2016 / Revised: 14 February 2017 / Accepted: 16 February 2017 / Published: 22 February 2017
Abstract
The use of Unmanned Aerial Vehicles (UAV) for photogrammetric surveying has recently gained enormous popularity. Images taken from UAVs are used for generating Digital Surface Models (DSMs) and orthorectified images. In the glaciological context, these can serve for quantifying ice volume change or glacier motion. This study focuses on the accuracy of UAV-derived DSMs. In particular, we analyze the influence of the number and disposition of Ground Control Points (GCPs) needed for georeferencing the derived products. A total of 1321 different DSMs were generated from eight surveys distributed on three glaciers in the Swiss Alps during winter, summer and autumn. The vertical and horizontal accuracy was assessed by cross-validation with thousands of validation points measured with a Global Positioning System. Our results show that the accuracy increases asymptotically with increasing number of GCPs until a certain density of GCPs is reached. We call this the optimal GCP density. The results indicate that DSMs built with this optimal GCP density have a vertical (horizontal) accuracy ranging between 0.10 and 0.25 m (0.03 and 0.09 m) across all datasets. In addition, the impact of the GCP distribution on the DSM accuracy was investigated. The local accuracy of a DSM decreases when increasing the distance to the closest GCP, typically at a rate of 0.09 m per 100-m distance. The impact of the glacier’s surface texture (ice or snow) was also addressed. The results show that besides cases with a surface covered by fresh snow, the surface texture does not significantly influence the DSM accuracy. Full article
(This article belongs to the Special Issue Recent Trends in UAV Remote Sensing)

Open Access Article Testing Accuracy and Repeatability of UAV Blocks Oriented with GNSS-Supported Aerial Triangulation
Remote Sens. 2017, 9(2), 172; doi:10.3390/rs9020172
Received: 15 December 2016 / Revised: 6 February 2017 / Accepted: 15 February 2017 / Published: 18 February 2017
Abstract
UAV Photogrammetry today already enjoys a largely automated and efficient data processing pipeline. However, the goal of dispensing with Ground Control Points looks closer, as dual-frequency GNSS receivers are put on board. This paper reports on the accuracy in object space obtained by GNSS-supported orientation of four photogrammetric blocks, acquired by a senseFly eBee RTK and all flown according to the same flight plan at 80 m above ground over a test field. Differential corrections were sent to the eBee from a nearby ground station. Block orientation has been performed with three software packages: PhotoScan, Pix4D and MicMac. The influence on the checkpoint errors of the precision given to the projection centers has been studied: in most cases, values in Z are critical. Without GCP, the RTK solution consistently achieves a RMSE of about 2–3 cm on the horizontal coordinates of checkpoints. In elevation, the RMSE varies from flight to flight, from 2 to 10 cm. Using at least one GCP, with all packages and all test flights, the geocoding accuracy of GNSS-supported orientation is almost as good as that of a traditional GCP orientation in XY and only slightly worse in Z. Full article
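The RMSE figures quoted above (about 2–3 cm horizontal, 2–10 cm vertical) are computed over checkpoints. A small illustrative sketch, not the authors' code, of planimetric and vertical RMSE from estimated versus surveyed checkpoint coordinates:

```python
import numpy as np

def checkpoint_rmse(estimated, surveyed):
    """Planimetric (XY) and vertical (Z) RMSE over check points,
    given two (n, 3) coordinate arrays in the same reference frame."""
    d = np.asarray(estimated, float) - np.asarray(surveyed, float)
    # XY: root mean square of the 2D horizontal error per point
    rmse_xy = np.sqrt((d[:, :2] ** 2).sum(axis=1).mean())
    # Z: root mean square of the elevation error
    rmse_z = np.sqrt((d[:, 2] ** 2).mean())
    return rmse_xy, rmse_z
```

Reporting the horizontal and vertical components separately, as the paper does, matters because GNSS-supported orientation typically degrades more in Z than in XY.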
(This article belongs to the Special Issue Recent Trends in UAV Remote Sensing)

Open Access Article Contour Detection for UAV-Based Cadastral Mapping
Remote Sens. 2017, 9(2), 171; doi:10.3390/rs9020171
Received: 2 December 2016 / Revised: 8 February 2017 / Accepted: 15 February 2017 / Published: 18 February 2017
Abstract
Unmanned aerial vehicles (UAVs) provide a flexible and low-cost solution for the acquisition of high-resolution data. The potential of high-resolution UAV imagery to create and update cadastral maps is being increasingly investigated. Existing procedures generally involve substantial fieldwork and many manual processes. Arguably, multiple parts of UAV-based cadastral mapping workflows could be automated. Specifically, as many cadastral boundaries coincide with visible boundaries, they could be extracted automatically using image analysis methods. This study investigates the transferability of gPb contour detection, a state-of-the-art computer vision method, to remotely sensed UAV images and UAV-based cadastral mapping. Results show that the approach is transferable to UAV data and automated cadastral mapping: object contours are comprehensively detected at completeness and correctness rates of up to 80%. The detection quality is optimal when the entire scene is covered with one orthoimage, due to the global optimization of gPb contour detection. However, a balance between high completeness and correctness is hard to achieve, so a combination with area-based segmentation and further object knowledge is proposed. The localization quality exhibits the usual dependency on ground resolution. The approach has the potential to accelerate the process of general boundary delineation during the creation and updating of cadastral maps. Full article
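Completeness and correctness rates, as quoted above, are commonly evaluated by matching detected boundaries against a reference within some tolerance buffer. A simplified point-set version (a tolerance-based match for illustration, not the paper's exact evaluation protocol):

```python
import numpy as np

def completeness_correctness(detected, reference, tol=2.0):
    """Tolerance-based evaluation of two 2D point sets, shape (n, 2):
    completeness = share of reference points within tol of any detection,
    correctness  = share of detected points within tol of any reference."""
    detected = np.asarray(detected, float)
    reference = np.asarray(reference, float)
    # Pairwise distances: rows index detections, columns index reference points
    d = np.linalg.norm(detected[:, None, :] - reference[None, :, :], axis=2)
    completeness = (d.min(axis=0) <= tol).mean()
    correctness = (d.min(axis=1) <= tol).mean()
    return completeness, correctness
```

The trade-off the abstract describes shows up directly here: detecting more contour pixels raises completeness but, once spurious contours appear, lowers correctness.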
(This article belongs to the Special Issue Recent Trends in UAV Remote Sensing)
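The completeness and correctness rates quoted in this abstract are standard buffer-based contour metrics: a detected boundary pixel counts as correct if it lies within a small tolerance of the reference boundary, and a reference pixel counts as found if a detection lies near it. The following is a minimal illustrative sketch of that scoring step only (not the gPb detector itself; the function names, the square-buffer dilation, and the toy scene are our own):

```python
import numpy as np

def boundary_scores(detected, reference, tol=1):
    """Completeness and correctness of a detected boundary mask against a
    reference mask, matching pixels within a square buffer of `tol` pixels."""
    def dilate(mask, r):
        # Naive square dilation via shifts; wrap-around at borders is
        # ignored for this toy example.
        out = mask.copy()
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                out |= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
        return out

    ref_buf = dilate(reference, tol)
    det_buf = dilate(detected, tol)
    tp_correct = np.logical_and(detected, ref_buf).sum()    # detections near reference
    tp_complete = np.logical_and(reference, det_buf).sum()  # reference pixels found
    correctness = tp_correct / max(detected.sum(), 1)
    completeness = tp_complete / max(reference.sum(), 1)
    return completeness, correctness

# Toy 8x8 scene: reference boundary is a vertical line, detection offset by one pixel.
ref = np.zeros((8, 8), dtype=bool); ref[:, 4] = True
det = np.zeros((8, 8), dtype=bool); det[:, 5] = True
comp, corr = boundary_scores(det, ref, tol=1)
```

With a one-pixel tolerance, the offset detection still matches fully, which is why such metrics are always reported together with the buffer width.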

Open Access Article: Using UAS Hyperspatial RGB Imagery for Identifying Beach Zones along the South Texas Coast
Remote Sens. 2017, 9(2), 159; doi:10.3390/rs9020159
Received: 18 November 2016 / Revised: 25 January 2017 / Accepted: 10 February 2017 / Published: 15 February 2017
Abstract
Shoreline information is fundamental for understanding coastal dynamics and for implementing environmental policy. The analysis of shoreline variability usually uses a group of shoreline indicators visibly discernible in coastal imagery, such as the seaward vegetation line, wet beach/dry beach line, and instantaneous water line. These indicators partition a beach into four zones: vegetated land, dry sand or debris, wet sand, and water. Unmanned aircraft system (UAS) remote sensing that can acquire imagery with sub-decimeter pixel size provides opportunities to map these four beach zones. This paper attempts to delineate four beach zones based on UAS hyperspatial RGB (Red, Green, and Blue) imagery, namely imagery of sub-decimeter pixel size, and feature textures. Besides the RGB images, this paper also uses USGS (the United States Geological Survey) Munsell HSV (Hue, Saturation, and Value) and CIELUV (the CIE 1976 (L*, u*, v*) color space) images transformed from an RGB image. The four beach zones are identified based on the Gray Level Co-Occurrence Matrix (GLCM) and Local Binary Pattern (LBP) textures. Experiments were conducted with South Padre Island photos acquired by a Nikon D80 camera mounted on the US-16 UAS during March 2014. The results show that USGS Munsell hue can separate land and water reliably. GLCM and LBP textures can slightly improve classification accuracies by both unsupervised and supervised classification techniques. The experiments also indicate that we could reach acceptable results on different photos while using training data from another photo for site-specific UAS remote sensing. The findings imply that parallel processing of classification is feasible.
(This article belongs to the Special Issue Recent Trends in UAV Remote Sensing)
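Of the texture features mentioned above, the Local Binary Pattern is the simpler one to illustrate: each pixel is encoded by comparing it to its eight neighbours, one bit per neighbour. A minimal plain-NumPy sketch of the classic 8-neighbour operator (not the paper's exact configuration; the function name, bit ordering, and toy input are ours):

```python
import numpy as np

def lbp8(img):
    """Basic 8-neighbour Local Binary Pattern code for each interior pixel:
    a neighbour that is >= the centre pixel contributes one set bit."""
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        # Shifted view of the image aligned with the centre pixels.
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= ((nb >= c).astype(np.uint8) << bit)
    return code

# A bright pixel surrounded by darker ones yields code 0 (no neighbour >= centre).
img = np.array([[0, 0, 0],
                [0, 5, 0],
                [0, 0, 0]], dtype=np.int32)
codes = lbp8(img)
```

Histograms of such codes over a window are what then feed the unsupervised or supervised classifiers compared in the paper.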

Open Access Article: A New Propagation Channel Synthesizer for UAVs in the Presence of Tree Canopies
Remote Sens. 2017, 9(2), 151; doi:10.3390/rs9020151
Received: 25 November 2016 / Revised: 15 January 2017 / Accepted: 9 February 2017 / Published: 13 February 2017
Abstract
Following the increasing popularity of unmanned aerial vehicles (UAVs) for remote sensing applications, reliable operation under a variety of radio wave propagation conditions is required. Assuming common outdoor scenarios, the presence of trees in the vicinity of a UAV or its ground terminal is highly probable. However, such a scenario is very difficult to address from a radio wave propagation point of view. Recently, an approach based on physical optics (PO) and the multiple scattering theory (MST) has been proposed by the authors, which enables fast and straightforward predictions of tree-scattered fields at microwave frequencies. In this paper, this approach is developed further into a generative model capable of providing both the narrowband and wideband synthetic time series of received/transmitted signals, which are needed for both UAV communications and remote sensing applications in the presence of scattering from tree canopies. The proposed channel synthesizer is validated using both an artificially-generated scenario and an actual experimental dataset.
(This article belongs to the Special Issue Recent Trends in UAV Remote Sensing)

Open Access Article: Using 3D Point Clouds Derived from UAV RGB Imagery to Describe Vineyard 3D Macro-Structure
Remote Sens. 2017, 9(2), 111; doi:10.3390/rs9020111
Received: 27 November 2016 / Revised: 13 January 2017 / Accepted: 23 January 2017 / Published: 28 January 2017
Abstract
In the context of precision viticulture, remote sensing in the optical domain offers a potential way to map crop structure characteristics, such as vegetation cover fraction, row orientation or leaf area index, that are later used in decision support tools. A method based on RGB imagery acquired with an unmanned aerial vehicle (UAV) is proposed to describe the vineyard 3D macro-structure. The dense point cloud is first extracted from the overlapping RGB images acquired over the vineyard using the Structure from Motion algorithm implemented in the Agisoft PhotoScan software. Then, the terrain altitude extracted from the dense point cloud is used to obtain the 2D distribution of vineyard height. By applying a threshold on the height, the rows are separated from the row spacing. Row height, width and spacing are then estimated, as well as the vineyard cover fraction and the percentage of missing segments along the rows. Results are compared with ground measurements, with root mean square error (RMSE) = 9.8 cm for row height, RMSE = 8.7 cm for row width and RMSE = 7 cm for row spacing. The row width, cover fraction, as well as the percentage of missing row segments, appear to be sensitive to the quality of the dense point cloud. Optimal flight configuration and camera settings are therefore mandatory to retrieve these characteristics with good accuracy.
(This article belongs to the Special Issue Recent Trends in UAV Remote Sensing)
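The height-thresholding step described in this abstract (separating vine rows from row spacing, then deriving cover fraction, row height and row spacing) can be sketched on a toy canopy-height grid. All values, the threshold, and the column-profile spacing estimate below are illustrative assumptions, not the paper's actual parameters:

```python
import numpy as np

# Toy 2D height grid (metres above terrain): two vine rows ~1.1-1.2 m tall,
# separated by bare inter-row ground.
h = np.zeros((6, 12))
h[:, 2:4] = 1.2   # row 1
h[:, 8:10] = 1.1  # row 2

threshold = 0.5                    # height cut separating rows from row spacing
row_mask = h > threshold
cover_fraction = row_mask.mean()   # fraction of the scene covered by rows
row_height = h[row_mask].mean()    # mean height over row pixels

# Row spacing from the column profile: distance between row centres (pixels).
cols = np.where(row_mask.any(axis=0))[0]
runs = np.split(cols, np.where(np.diff(cols) > 1)[0] + 1)  # contiguous column runs
centres = [r.mean() for r in runs]
spacing_px = centres[1] - centres[0]
```

Multiplying the pixel-based estimates by the orthoimage ground sampling distance converts them to metric row width and spacing.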

Open Access Feature Paper Article: A Convolutional Neural Network Approach for Assisting Avalanche Search and Rescue Operations with UAV Imagery
Remote Sens. 2017, 9(2), 100; doi:10.3390/rs9020100
Received: 11 November 2016 / Revised: 29 December 2016 / Accepted: 14 January 2017 / Published: 24 January 2017
Abstract
Following an avalanche, one of the factors that affect victims’ chance of survival is the speed with which they are located and dug out. Rescue teams use techniques like trained rescue dogs and electronic transceivers to locate victims. However, the resources and time required to deploy rescue teams are major bottlenecks that decrease a victim’s chance of survival. Advances in the field of Unmanned Aerial Vehicles (UAVs) have enabled the use of flying robots equipped with sensors like optical cameras to assess the damage caused by natural or manmade disasters and locate victims in the debris. In this paper, we propose assisting avalanche search and rescue (SAR) operations with UAVs fitted with vision cameras. The sequence of images of the avalanche debris captured by the UAV is processed with a pre-trained Convolutional Neural Network (CNN) to extract discriminative features. A trained linear Support Vector Machine (SVM) is integrated on top of the CNN to detect objects of interest. Moreover, we introduce a pre-processing method to increase the detection rate and a post-processing method based on a Hidden Markov Model to improve the prediction performance of the classifier. Experimental results on two different datasets at different levels of resolution show that detection performance improves with increasing resolution, at the cost of longer computation time. They also suggest that a significant decrease in processing time can be achieved thanks to the pre-processing step.
(This article belongs to the Special Issue Recent Trends in UAV Remote Sensing)
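The Hidden-Markov-Model post-processing mentioned in this abstract amounts to smoothing per-frame classifier scores with a decoder that penalises isolated state flips. A self-contained Viterbi sketch of that idea (the transition matrix, prior, and per-frame scores below are invented for illustration, not taken from the paper):

```python
import numpy as np

def viterbi_smooth(log_emis, log_trans, log_prior):
    """Most likely hidden-state sequence (e.g. object present/absent per frame)
    given per-frame log-likelihoods and a transition model."""
    T, S = log_emis.shape
    delta = log_prior + log_emis[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans   # scores[i, j]: best path ending i -> j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_emis[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):             # backtrack
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Noisy per-frame scores: an isolated weak "object" spike at frame 2 gets
# smoothed away because the sticky transition matrix penalises state switches.
emis = np.log(np.array([[0.9, 0.1],
                        [0.9, 0.1],
                        [0.4, 0.6],
                        [0.9, 0.1],
                        [0.9, 0.1]]))
trans = np.log(np.array([[0.95, 0.05],
                         [0.05, 0.95]]))
prior = np.log(np.array([0.5, 0.5]))
states = viterbi_smooth(emis, trans, prior)
```

The same machinery generalises to more states (e.g. several object classes) by enlarging the transition matrix.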

Open Access Feature Paper Article: First Results of a Tandem Terrestrial-Unmanned Aerial mapKITE System with Kinematic Ground Control Points for Corridor Mapping
Remote Sens. 2017, 9(1), 60; doi:10.3390/rs9010060
Received: 30 November 2016 / Revised: 1 January 2017 / Accepted: 4 January 2017 / Published: 11 January 2017
Abstract
In this article, we report on the first results of the mapKITE system, a tandem terrestrial-aerial concept for geodata acquisition and processing, obtained in corridor mapping missions. The system combines an Unmanned Aerial System (UAS) and a Terrestrial Mobile Mapping System (TMMS) operated in a singular way: real-time waypoints are computed from the TMMS platform and sent to the UAS in a follow-me scheme. This approach leads to a simultaneous acquisition of aerial-plus-ground geodata and, moreover, opens the door to an advanced post-processing approach for sensor orientation. The current contribution focuses on analysing the impact of the new, dynamic Kinematic Ground Control Points (KGCPs), which arise inherently from the mapKITE paradigm, as an alternative to conventional, costly Ground Control Points (GCPs). In the frame of a mapKITE campaign carried out in June 2016, we present results entailing sensor orientation and calibration accuracy assessment through ground check points, together with a precision and correlation analysis of the self-calibration parameters’ estimation. Conclusions indicate that the mapKITE concept reduces the need for conventional GCPs to just a couple at each corridor end, used together with the KGCPs, achieving check point horizontal accuracy of μE,N ≈ 1.7 px (3.4 cm) and vertical accuracy of μh ≈ 4.3 px (8.6 cm). Since they were obtained with a simplified version of the system, these preliminary results are encouraging.
(This article belongs to the Special Issue Recent Trends in UAV Remote Sensing)

Open Access Article: Optimizing Multiple Kernel Learning for the Classification of UAV Data
Remote Sens. 2016, 8(12), 1025; doi:10.3390/rs8121025
Received: 26 October 2016 / Revised: 8 December 2016 / Accepted: 9 December 2016 / Published: 16 December 2016
Abstract
Unmanned Aerial Vehicles (UAVs) are capable of providing high-quality orthoimagery and 3D information in the form of point clouds at a relatively low cost. Their increasing popularity stresses the necessity of understanding which algorithms are especially suited for processing the data obtained from UAVs. The features that are extracted from the point cloud and imagery have different statistical characteristics and can be considered as heterogeneous, which motivates the use of Multiple Kernel Learning (MKL) for classification problems. In this paper, we illustrate the utility of applying MKL for the classification of heterogeneous features obtained from UAV data through a case study of an informal settlement in Kigali, Rwanda. Results indicate that MKL can achieve a classification accuracy of 90.6%, a 5.2% increase over a standard single-kernel Support Vector Machine (SVM). A comparison of seven MKL methods indicates that linearly-weighted kernel combinations based on simple heuristics are competitive with respect to computationally-complex, non-linear kernel combination methods. We further underline the importance of utilizing appropriate feature grouping strategies for MKL, which has not been directly addressed in the literature, and we propose a novel, automated feature grouping method that achieves a high classification accuracy for various MKL methods.
(This article belongs to the Special Issue Recent Trends in UAV Remote Sensing)
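A linearly-weighted kernel combination of the kind this abstract finds competitive builds one kernel per feature group and sums them with heuristic weights. The sketch below uses centred kernel-target alignment as the weighting heuristic; this is one common choice, not necessarily the paper's, and the two feature groups and labels are synthetic:

```python
import numpy as np

def rbf_kernel(X, gamma):
    """RBF kernel matrix over the rows of X."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def alignment(K, y):
    """Centred kernel-target alignment: how well kernel K matches labels y."""
    Yk = np.outer(y, y)
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    Kc = H @ K @ H
    return (Kc * Yk).sum() / (np.linalg.norm(Kc) * np.linalg.norm(Yk))

rng = np.random.default_rng(0)
y = np.array([1] * 10 + [-1] * 10)
X_informative = y[:, None] + 0.1 * rng.normal(size=(20, 2))  # separates classes
X_noise = rng.normal(size=(20, 2))                            # pure-noise group

K1 = rbf_kernel(X_informative, 0.5)
K2 = rbf_kernel(X_noise, 0.5)
aligns = np.maximum(np.array([alignment(K1, y), alignment(K2, y)]), 0.0)
w = aligns / aligns.sum()
K_combined = w[0] * K1 + w[1] * K2   # linearly-weighted multiple-kernel matrix
```

The combined matrix can then be passed to any kernel classifier that accepts precomputed kernels; the informative group receives the larger weight.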

Open Access Article: Biomass Estimation Using 3D Data from Unmanned Aerial Vehicle Imagery in a Tropical Woodland
Remote Sens. 2016, 8(11), 968; doi:10.3390/rs8110968
Received: 27 August 2016 / Revised: 14 November 2016 / Accepted: 16 November 2016 / Published: 23 November 2016
Abstract
Application of 3D data derived from images captured using unmanned aerial vehicles (UAVs) in forest biomass estimation has shown great potential in reducing costs and improving the estimates. However, such data have never been tested in miombo woodlands. UAV-based biomass estimation relies on the availability of reliable digital terrain models (DTMs). The main objective of this study was to evaluate the application of 3D data derived from UAV imagery in biomass estimation and to compare the impacts of DTMs generated using different methods and parameter settings. Biomass was modeled using data acquired from 107 sample plots in a forest reserve in the miombo woodlands of Malawi. The results indicated that there are no significant differences (p = 0.985) between the tested DTMs, except for the one based on the shuttle radar topography mission (SRTM). A model developed using unsupervised ground filtering based on a grid search approach had the smallest root mean square error (RMSE) of 46.7% of a mean biomass value of 38.99 Mg·ha−1. Amongst the independent variables, maximum canopy height (Hmax) was the most frequently selected. In addition, all models included spectral variables incorporating the three color bands red, green and blue. The study has demonstrated that UAV-acquired image data can be used for biomass estimation in miombo woodlands using automatically generated DTMs.
(This article belongs to the Special Issue Recent Trends in UAV Remote Sensing)
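The dependency on a DTM described above comes from the normalisation step: ground filtering yields a terrain surface, point heights are measured relative to it, and predictors such as Hmax are taken from the normalised heights. A deliberately crude sketch of that chain (lowest-point-per-grid-cell ground filtering on a synthetic point cloud; all parameters and the data are our own illustration, far simpler than the paper's grid-search filtering):

```python
import numpy as np

# Synthetic photogrammetric point cloud (x, y, z): sloping ground plus
# scattered "tree" points 5-15 m above it.
rng = np.random.default_rng(1)
n = 500
x, y = rng.uniform(0, 50, n), rng.uniform(0, 50, n)
ground = 0.1 * x  # gentle terrain slope
canopy = np.where(rng.random(n) < 0.3, rng.uniform(5, 15, n), 0.0)
z = ground + canopy

cell = 10.0  # ground-filtering grid cell size (m)
ix, iy = (x // cell).astype(int), (y // cell).astype(int)
dtm = np.full((5, 5), np.inf)
for i, j, hz in zip(ix, iy, z):
    dtm[i, j] = min(dtm[i, j], hz)  # lowest point per cell approximates terrain

height = z - dtm[ix, iy]  # heights normalised to the local DTM
hmax = height.max()       # Hmax-style canopy-height predictor
```

In a real workflow the per-plot Hmax (and spectral means) would then enter the biomass regression model.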

Open Access Article: An Image-Based Approach for the Co-Registration of Multi-Temporal UAV Image Datasets
Remote Sens. 2016, 8(9), 779; doi:10.3390/rs8090779
Received: 4 July 2016 / Revised: 3 September 2016 / Accepted: 13 September 2016 / Published: 21 September 2016
Abstract
In recent years, UAVs (Unmanned Aerial Vehicles) have become very popular as low-cost image acquisition platforms, since they allow for high-resolution and repetitive flights in a flexible way. One application is to monitor dynamic scenes. However, the fully automatic co-registration of the acquired multi-temporal data still remains an open issue. Most UAVs are not able to provide accurate direct image georeferencing, and the co-registration process is mostly performed with the manual introduction of ground control points (GCPs), which is time-consuming, costly and sometimes not possible at all. A new technique to automate the co-registration of multi-temporal high-resolution image blocks without the use of GCPs is investigated in this paper. The image orientation is initially performed on a reference epoch, and the registration of the following datasets is achieved by including some anchor images from the reference data. The interior and exterior orientation parameters of the anchor images are then fixed in order to constrain the Bundle Block Adjustment of the slave epoch to be aligned with the reference one. The study involved the use of two different datasets acquired over a construction site and a post-earthquake damaged area. Different tests have been performed to assess the registration procedure using both a manual and an automatic approach for the selection of anchor images. The tests have shown that the procedure provides results comparable to the traditional GCP-based strategy, and both the manual and automatic selection of the anchor images can provide reliable results.
(This article belongs to the Special Issue Recent Trends in UAV Remote Sensing)

Open Access Article: Comparison of Manual Mapping and Automated Object-Based Image Analysis of Non-Submerged Aquatic Vegetation from Very-High-Resolution UAS Images
Remote Sens. 2016, 8(9), 724; doi:10.3390/rs8090724
Received: 3 May 2016 / Revised: 22 August 2016 / Accepted: 29 August 2016 / Published: 1 September 2016
Abstract
Aquatic vegetation has important ecological and regulatory functions and should be monitored in order to detect ecosystem changes. Field data collection is often costly and time-consuming; remote sensing with unmanned aircraft systems (UASs) provides aerial images with sub-decimetre resolution and offers a potential data source for vegetation mapping. In a manual mapping approach, UAS true-colour images with 5-cm-resolution pixels allowed for the identification of non-submerged aquatic vegetation at the species level. However, manual mapping is labour-intensive, and while automated classification methods are available, they have rarely been evaluated for aquatic vegetation, particularly at the scale of individual vegetation stands. We evaluated classification accuracy and time-efficiency for mapping non-submerged aquatic vegetation at three levels of detail at five test sites (100 m × 100 m) differing in vegetation complexity. We used object-based image analysis and tested two classification methods (threshold classification and Random Forest) using eCognition®. The automated classification results were compared to results from manual mapping. Using threshold classification, overall accuracy at the five test sites ranged from 93% to 99% for the water-versus-vegetation level and from 62% to 90% for the growth-form level. Using Random Forest classification, overall accuracy ranged from 56% to 94% for the growth-form level and from 52% to 75% for the dominant-taxon level. Overall classification accuracy decreased with increasing vegetation complexity. In test sites with more complex vegetation, automated classification was more time-efficient than manual mapping. This study demonstrated that automated classification of non-submerged aquatic vegetation from true-colour UAS images was feasible, indicating good potential for operative mapping of aquatic vegetation. 
When choosing between manual and automated mapping, the desired level of thematic detail and the required accuracy for the mapping task need to be considered.
(This article belongs to the Special Issue Recent Trends in UAV Remote Sensing)

Open Access Article: Quantifying the Effect of Aerial Imagery Resolution in Automated Hydromorphological River Characterisation
Remote Sens. 2016, 8(8), 650; doi:10.3390/rs8080650
Received: 5 June 2016 / Revised: 23 July 2016 / Accepted: 3 August 2016 / Published: 10 August 2016
Abstract
Existing regulatory frameworks aiming to improve the quality of rivers place hydromorphology as a key factor in the assessment of hydrology, morphology and river continuity. The majority of available methods for hydromorphological characterisation rely on the identification of homogeneous areas (i.e., features) of flow, vegetation and substrate. For that purpose, aerial imagery is used to identify existing features through either visual observation or automated classification techniques. There is reason to believe that success in feature identification relies on the resolution of the imagery used. However, little effort has yet been made to quantify the uncertainty in feature identification associated with the resolution of the aerial imagery. This paper contributes to addressing this gap in knowledge by contrasting results in automated hydromorphological feature identification from unmanned aerial vehicle (UAV) aerial imagery captured at three resolutions (2.5 cm, 5 cm and 10 cm) along a 1.4 km river reach. The results show that resolution plays a key role in the accuracy and variety of features identified, with larger identification errors observed for riffles and side bars. This in turn has an impact on the ecological characterisation of the river reach. The research shows that UAV technology could be essential for unbiased hydromorphological assessment.
(This article belongs to the Special Issue Recent Trends in UAV Remote Sensing)

Open Access Article: Stable Imaging and Accuracy Issues of Low-Altitude Unmanned Aerial Vehicle Photogrammetry Systems
Remote Sens. 2016, 8(4), 316; doi:10.3390/rs8040316
Received: 16 January 2016 / Revised: 25 March 2016 / Accepted: 6 April 2016 / Published: 9 April 2016
Abstract
Stable imaging of an unmanned aerial vehicle (UAV) photogrammetry system is an important issue that affects the data processing and application of the system. Compared with traditional aerial images, the large rotations in roll, pitch, and yaw angles of UAV images decrease image quality and result in image deformation, thereby affecting the ground resolution, overlaps, and the consistency of the stereo models. These factors also cause difficulties in automatic tie point matching, image orientation, and the accuracy of aerial triangulation (AT). The issues of large-angle photography in UAV photogrammetry systems are discussed and analyzed quantitatively in this paper, and a simple and lightweight three-axis stabilization platform, which combines a low-precision integrated inertial navigation system with a three-axis mechanical platform, is used to mitigate the problem. An experiment was carried out with an airship as the flight platform. Another experimental dataset, which was acquired by the same flight platform without a stabilization platform, was utilized for a comparative test. Experimental results show that the system can effectively isolate the swing of the flying platform. To ensure objective and reliable results, another group of experimental datasets, which were acquired using a fixed-wing UAV platform, was also analyzed. Statistical results of the experimental datasets confirm that stable imaging of a UAV platform can help improve the quality of aerial photography imagery and the accuracy of AT, and can potentially improve the application of images acquired by a UAV.
(This article belongs to the Special Issue Recent Trends in UAV Remote Sensing)
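The effect of off-nadir attitude on ground resolution mentioned above can be illustrated with first-order imaging geometry: the slant range to the image centre grows as 1/cos(tilt), and along-tilt obliquity adds roughly another 1/cos(tilt) factor, so the along-tilt ground sampling distance (GSD) degrades approximately as 1/cos². A small sketch under that approximation (the camera parameters are arbitrary example values, not from the paper):

```python
import math

def gsd_nadir(alt_m, focal_mm, pixel_um):
    """Ground sampling distance (m/px) for a nadir-pointing camera."""
    return alt_m * (pixel_um * 1e-6) / (focal_mm * 1e-3)

def gsd_tilted(alt_m, focal_mm, pixel_um, tilt_deg):
    """Approximate GSD at the image centre for an off-nadir tilt:
    across-tilt scales as 1/cos(t), along-tilt roughly as 1/cos(t)^2."""
    t = math.radians(tilt_deg)
    g0 = gsd_nadir(alt_m, focal_mm, pixel_um)
    return g0 / math.cos(t), g0 / math.cos(t) ** 2  # (across, along)

# 100 m flying height, 35 mm lens, 5 um pixels, 15 degrees of tilt
g_nadir = gsd_nadir(100, 35, 5)
across, along = gsd_tilted(100, 35, 5, 15)
```

Even a modest 15° tilt stretches the along-tilt footprint measurably, which is why mechanical stabilisation pays off in overlap consistency and triangulation accuracy.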

Review


Open Access Review: Review of Automatic Feature Extraction from High-Resolution Optical Sensor Data for UAV-Based Cadastral Mapping
Remote Sens. 2016, 8(8), 689; doi:10.3390/rs8080689
Received: 30 June 2016 / Revised: 3 August 2016 / Accepted: 11 August 2016 / Published: 22 August 2016
Abstract
Unmanned Aerial Vehicles (UAVs) have emerged as a rapid, low-cost and flexible acquisition system that appears feasible for application in cadastral mapping: high-resolution imagery, acquired using UAVs, enables a new approach for defining property boundaries. However, UAV-derived data are arguably not exploited to their full potential: based on UAV data, cadastral boundaries are visually detected and manually digitized. A workflow that automatically extracts boundary features from UAV data could increase the pace of current mapping procedures. This review introduces a workflow considered applicable for automated boundary delineation from UAV data. This is done by reviewing approaches for feature extraction from various application fields and synthesizing these into a hypothetical generalized cadastral workflow. The workflow consists of preprocessing, image segmentation, line extraction, contour generation and postprocessing. The review lists example methods per workflow step, including a description, trialed implementation, and a list of case studies applying individual methods. Furthermore, accuracy assessment methods are outlined. Advantages and drawbacks of each approach are discussed in terms of their applicability to UAV data. This review can serve as a basis for future work on the implementation of the most suitable methods in a UAV-based cadastral mapping workflow.
(This article belongs to the Special Issue Recent Trends in UAV Remote Sensing)

Journal Contact

MDPI AG
Remote Sensing Editorial Office
St. Alban-Anlage 66, 4052 Basel, Switzerland
Tel. +41 61 683 77 34
Fax: +41 61 302 89 18