Article

Histogram-Based Edge Detection for River Coastline Mapping Using UAV-Acquired RGB Imagery

Faculty of Earth Sciences and Environmental Management, University of Wrocław, pl. Uniwersytecki 1, 50-137 Wrocław, Poland
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(14), 2565; https://doi.org/10.3390/rs16142565
Submission received: 26 May 2024 / Revised: 3 July 2024 / Accepted: 11 July 2024 / Published: 12 July 2024
(This article belongs to the Section Remote Sensing in Geology, Geomorphology and Hydrology)

Abstract

This paper presents a new approach for delineating river coastlines in RGB close-range nadir aerial imagery acquired by unmanned aerial vehicles (UAVs), aimed at facilitating waterline detection through the reduction of the dimensions of a colour space and the use of coarse grids rather than pixels. Since water has uniform brightness, expressed as the value (V) component in the hue, saturation, value (HSV) colour model, the reduction in question is attained by extracting V and investigating its histogram to identify areas where V does not vary considerably. A set of 30 nadir UAV-acquired photos, taken at five different locations in Poland, was used to validate the approach. For 67% of all analysed images (both wide and narrow rivers were photographed), the detection rate was above 50%, with the false hit rate ranging between 5.00% and 61.36% (mean 36.62%). When the analysis was limited to wide rivers, the percentage of images in which the detection rate exceeded 50% increased to 80%, and the false hit rates remained similar. Apart from river width, land cover in the vicinity of the river, as well as the uniformity of water colour, were found to be factors which influence the waterline detection performance. Our contribution to the existing knowledge is a rough waterline detection approach based on limited information (only the V band, and grids rather than pixels).

1. Introduction

1.1. Context

Rapid climate changes have been observed in recent years, manifested by, among other things, an increasing frequency of extreme weather and fluvial phenomena, such as droughts, water shortages or floods [1,2]. In this context, flood management and the management of limited water resources are very important. In order to effectively warn the population against flooding, it is necessary to conduct regular research on extreme hydrological and meteorological events, as well as to issue accurate hydrological forecasts [3].
Effective forecasting requires access to regularly obtained hydrological measurements. For the purpose of geomorphological mapping, however, spatial data on rivers are necessary. Acquisition of such datasets is often difficult because of limited access to river banks, the linear character of rivers and the spatial pattern of river systems. Though in situ water level measurements are widespread around the world, they offer pointwise data. If the spatial distribution of gauges is sparse, predicting water levels along the river can be a challenge. In order to overcome the drawbacks of pointwise measurements, remote sensing platforms are utilized, such as unmanned aerial vehicles (hereinafter referred to as UAVs), which are capable of collecting high-resolution spatial information for extensive areas in a simple and cheap way. UAVs are important and well-known research tools that can measure and observe rivers and their dynamics. They have been used to create digital terrain models of river channels [4], analyse morphodynamic changes after floods [5,6], observe river stages [7], and monitor channel morphology [8]. Due to climate change and the associated increase in the intensity and frequency of flooding, overbank flow should be monitored in near-real time [9]. One of the rapidly growing technologies for observing inundation is the UAV [10], which, when equipped with water detection tools, can enable rapid mapping of water surface area [11]. Thus, methods for rough determination of the waterline based on the most popular type of UAV data, RGB photos, may potentially increase flood awareness.

1.2. Related Work

UAVs may also be useful for delineating river coastlines or measuring water levels. There exist many approaches for delineating waterlines, determining water extents or estimating water levels (Table 1). Satellite data have been used to extract water bodies based on a transformation from the RGB to the HSV colour space [12], as well as to investigate changes in lake water extent [13]. Landsat images have been processed to automatically detect coastlines [14]. Unmanned ground vehicles (UGVs) [15] as well as unmanned surface vehicles (USVs) [16,17] may also be useful for water body and waterline detection. It is also possible to conduct hydrological measurements by means of simple and inexpensive devices. Smartphones have been used to determine shorelines [18], and water stages have been estimated using a low-cost camera [19,20,21]. In addition, UAVs have helped scientists to observe river stages [7,22], water levels in dams [23], and to identify water body extent [24]. Among the above-mentioned papers, there are approaches that detect river coastlines based on RGB images. Not uncommonly, other sensors [24] or equipment [19] were found to be necessary to support such detection. There is a shortage of papers on delineating river coastlines solely on the basis of RGB UAV-acquired imagery, particularly targeted at facilitating computations through reducing the dimensionality of the colour space.
The concept of our approach resides in projecting the close-range RGB aerial imagery into the hue, saturation, value (HSV) colour space, and consequently in limiting the scrutiny to the V component (reducing the model dimension and, thus, facilitating computations). There exist papers in which HSV was utilized for water detection, classification and segmentation. Namely, all three components (H, S and V) were analysed in [25] to detect clean and polluted water images. Yu et al. [26] used H to detect the shadow area of the river surface. The S component was used in [27], where the flooded area in a city environment was detected based on crowdsourced images. In our work, we used the V component, which was selected on the basis of the study by Rankin and Matthies [15], who found that it reveals small variability for water. Unlike other studies, we do not classify imagery at the pixel scale, but carry out statistical inference instead. Indeed, we consider the probability distribution of V, particularly looking at kurtosis and the variability around the modal value, leading to a classification of grids of size 250 × 250 px into those that include a river coastline and those that do not. The gridwise approach is, along with the reduction of the colour space, yet another feature that facilitates our waterline detection method (hereinafter also referred to as a detector). In other words, the strength of our method resides in its simplicity, namely that the waterline can be rapidly reconstructed in a rough way based on limited data (image parts corresponding to a superimposed grid lattice with information on the V band).
Table 1. A review of papers considering waterline detection.
No. | Reference | Platform 1 | Sensor | Method | Note
1 | Rankin and Matthies (2010) [15] | UGV | RGB | Thresholding, segmentation | Water extent
2 | Namikawa et al. (2016) [12] | S | RGB | Unsupervised classification, thresholding | Water extent
3 | Niedzielski et al. (2016) [7] | UAV | RGB | Manual digitization, statistical analysis | Water stages
4 | Wei and Zhang (2016) [16] | USV | RGB | Structure extraction, texture analysis |
5 | Deng et al. (2017) [13] | S | RGB, NIR, SWIR | Random forest | Water extent
6 | Kröhnert and Meichsner (2017) [18] | P | RGB | Spatio-temporal histogram analysis, segmentation |
7 | Witherow et al. (2017) [27] | LCC | RGB | Threshold-based segmentation | Water extent
8 | Zhan et al. (2017) [17] | USV | RGB | RANSAC algorithm |
9 | Eltner et al. (2018) [19] | LCC | RGB | Spatio-temporal texture, seeded region growing, Canny edge detection | Water stages
10 | Kröhnert and Eltner (2018) [20] | P, LCC | RGB | Spatio-temporal histogram analysis, segmentation |
11 | Ridolfi and Manciola (2018) [23] | UAV | RGB | Pixel classification, Canny edge detection | A dam
12 | Tymków et al. (2019) [24] | UAV | RGB, LiDAR, TIR, RGB+TIR | Supervised classification, thresholding of pixel values, image transforms | Water extent
13 | Viaña-Borja and Ortega-Sanchez (2019) [14] | S | RGB, NIR, SWIR | New water index |
14 | Bandini et al. (2020) [22] | UAV | Radar, LiDAR, RGB | Altimetric measurements, water level extraction of point clouds | Water stages
15 | Xue et al. (2020) [25] | LCC | RGB | Eigen value analysis, gradient distributions | Water images classification
16 | Yu et al. (2020) [26] | LCC | RGB | Improved Local Binary Pattern | Water extent
17 | Eltner et al. (2021) [21] | LCC | RGB | Neural networks |
1 LCC—low-cost camera, P—smartphone, S—satellite, UAV—unmanned aerial vehicle, UGV—unmanned ground vehicle, USV—unmanned surface vehicle.

2. Motivation

It is apparent from Table 1 that there exists a considerable number of methods for delineating waterlines using sensors other than RGB. For instance, thermal or hyperspectral imagery is known to provide a better signal than RGB for extracting coastlines. However, the motivation for our work is inspired by the availability of drones equipped with RGB cameras. Statistically, they are the most widely available throughout the world and, most importantly, RGB images are the most widely analysed UAV-based data in scientific papers. Table 2 presents the use of RGB cameras on UAVs, and confirms the widespread popularity of these sensors in light of qualitative and quantitative characteristics.
Given the aforementioned superiority of RGB sensors in terms of their availability and widespread use, the coastline detection method presented in this paper may be of interest to users of UAVs equipped with visible-light sensors.
It is worth noting that there is a research gap in rapid coastline mapping [33]. The simple (V band and coarse grid), and therefore rapid, approach outlined in this paper fits the problem in question.

3. Materials and Methods

3.1. UAV Data Acquisition and Preprocessing

To find out whether our approach is versatile, we used a set of 30 nadir UAV-acquired close-range photos, presenting different types of river channels (lowland and upland rivers as well as mountain streams). The aerial images were acquired at five different locations in Poland: (a) middle Odra river, (b) upper Odra river, (c) West Sudetes (Izera and Kwisa rivers), (d) Kłodzko County (Biała Lądecka, Bystrzyca Dusznicka, Nysa Kłodzka and Ścinawka rivers), and (e) lower Bug river (Figure 1).
Images were collected under different flight conditions. Two fixed-wing UAVs (eBee and swingletCAM by SenseFly), equipped with RGB cameras (Canon S110 and Canon IXUS 220HS, respectively) and flying at altitudes from 75 m to 148 m above take-off (ATO), were used to capture photos. Ground sampling distance ranged from 2.67 cm/px to 5.27 cm/px. An extensive data collection period, between 2012 and 2022, was taken into account. We chose these specific cameras to ensure data consistency over such a long period of image acquisition. In order to provide a broad range of environmental conditions, various morphological (wide and deep rivers as well as narrow and shallow streams) and hydrological (low and high stages as well as low and high discharges) features were considered during the image selection process. River characteristics, indicating the presence of diverse environments in the analysed photos, are juxtaposed in Table 3. Minimum and maximum widths, as well as lengths, were measured on the basis of orthophotomaps, with resolutions ranging from 2.6 cm/px to 5.2 cm/px. Water stage and discharge data were acquired courtesy of the Institute of Meteorology and Water Management—National Research Institute (Instytut Meteorologii i Gospodarki Wodnej—Państwowy Instytut Badawczy, IMGW-PIB), except the discharge dataset for Izera, which was based on flow meter measurements. Natural rivers were selected for the study, without considerable human interventions within the channel (e.g., without groynes, retaining walls, rip-raps). In some images, the river banks were clearly visible, so it should be easy for the detector to recognize the river coastline. In other situations under study, the banks were partially covered by vegetation, which may affect the detector efficiency (Table 4). The thirty JPG images processed in this paper are included in the Supplementary Material so that a comparison with other methods for river coastline delineation is possible.
Images were processed in two stages (Figure 2). The first stage (preprocessing) consisted of three steps. In the first step, RGB photos were split into grids of size 250 × 250 px (hereinafter also known as slices), which was assumed to be the initial size. The choice of such a grid size was determined by the image dimensions. Namely, each photo was divided into an integer number of 250 × 250 px grids, discarding the remaining bottom and right stripes. Moreover, such a grid size was earlier used in preprocessing RGB photographs in order to keep standard pixel intensity and to reduce computational time [34]. In this step, the PIL library for Python and its crop method were used. Subsequently, each slice was projected into the HSV colour space and the single band V was extracted, using the skimage package and its rgb2hsv method. Water bodies usually have uniform brightness, also called value [15], ranging from 0 to 1. Each pixel within the grid has its own value representing V. The next step was to convert the values recorded in each grid from raster to numeric format, and to write them to a separate .csv file. In each file, one record corresponds to the V value of a single pixel. This step was conducted using the PIL library and the getdata method. We also tested grids of size 125 × 125 px, to which the aforementioned three-step procedure was also applied.
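To make the preprocessing stage concrete, the following Python sketch mirrors the three steps outlined above (slicing into 250 × 250 px grids with PIL's crop, projecting each slice to HSV with skimage's rgb2hsv, and writing the V values of all pixels to a .csv file). The input file name, the output naming scheme and the per-pixel CSV layout are illustrative assumptions, not the authors' exact implementation.

```python
# A minimal sketch of the preprocessing stage: grid slicing, RGB -> HSV
# projection, extraction of the V band, and export to .csv (one record per pixel).
import csv
import numpy as np
from PIL import Image
from skimage.color import rgb2hsv

GRID = 250  # initial slice size in pixels

def slice_and_export(photo_path: str, out_prefix: str, grid: int = GRID) -> None:
    img = Image.open(photo_path).convert("RGB")
    width, height = img.size
    # Keep only the integer number of grids; bottom and right stripes are discarded.
    for row in range(height // grid):
        for col in range(width // grid):
            box = (col * grid, row * grid, (col + 1) * grid, (row + 1) * grid)
            slice_rgb = np.asarray(img.crop(box)) / 255.0
            v_band = rgb2hsv(slice_rgb)[:, :, 2]  # V component, in [0, 1]
            out_name = f"{out_prefix}_r{row}_c{col}.csv"  # hypothetical naming scheme
            with open(out_name, "w", newline="") as f:
                writer = csv.writer(f)
                for value in v_band.ravel():
                    writer.writerow([value])

if __name__ == "__main__":
    slice_and_export("Odra_8.jpg", "Odra_8")  # hypothetical input file
```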

3.2. Edge Detection

Although it is rather uncommon to detect river coastlines based solely on RGB imagery (i.e., without thermal or hyperspectral information), we attempted to roughly (at the lattice level) identify coastlines in visible-light UAV-acquired aerial images. We adopted the aforementioned concept of Rankin and Matthies [15], who noticed that water has uniform brightness in imagery, which corresponds to small variability of the V band in the HSV colour space. Our approach is based on projecting RGB images into the HSV space and extracting the V band for further scrutiny. Hence, we do not seek specific colours which, in fact, may vary considerably for water (water can be expressed in RGB aerial imagery as a blue, green, brown, or even black feature). Instead, we consider the empirical probability distribution (histogram) of V, attempting to describe cases in which a histogram, or a part of it, corresponds to a population of pixels that do not vary considerably in terms of V.
In our approach, the key distinction is based on the number of modes a given histogram reveals. The following situations may occur: (1) a unimodal distribution of V, and (2) a multimodal distribution of V. To test for unimodality against multimodality, we use the method of Silverman [35] and its R implementation in the modetest function, with 100 bootstrap samples.
In the first case (point 1 above), we probably face land cover of uniform or similar characteristics over the entire image grid (Figure 3). According to Rankin and Matthies [15], water reveals a small variability of V; therefore, leptokurtic histograms are likely to correspond to water bodies. Mesokurtic or platykurtic histograms do not represent a small variability of V, thus they are not considered as representing water. Although there are statistical tests for kurtosis (e.g., the Anscombe–Glynn test), they mainly help to identify statistically significant departures from mesokurticity and do not measure the strength of such departures. Therefore, to detect whether a given histogram is leptokurtic, we adopted the commonly known criterion that meaningfully non-normal distributions occur when kurtosis exceeds 7 [36,37], after [38]. Note that Curran et al. [36] describe conditions with a kurtosis of 7 (and a skewness of 2) as moderately departing from multivariate normality, and they also claim that “[…] it seems clear that obtained univariate values approaching at least 2.0 and 7.0 for skewness and kurtoses are suspect”. Also, Kim [38] referred to West et al. [37], and claimed that kurtosis greater than 7 indicates non-normality, with positive excess kurtosis being leptokurtic. Although kurtosis is a good measure of peakedness around the mean, in the case of asymmetric distributions the mode can depart from the mean, and an additional measure of concentration around the mode is needed. Therefore, we devised an additional simple empirical test for measuring concentration around the mode, known hereinafter as the concentration test. It considers the interval ranging from “mode minus 25% of standard deviation” to “mode plus 25% of standard deviation”, and calculates the percentage of values falling into it. We assume that there exists a concentration of values around the mode if the above-mentioned percentage is considerably greater than 19.74% (the probability that a standard normal variable falls between −0.25 and 0.25, i.e., within a quarter of a standard deviation of its mean), plus some empirical excess value δ responsible for a considerable excess above 19.74%. In other words, the concentration test checks whether the underlying empirical distribution is narrower around the mode than the standard normal distribution.
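A minimal Python sketch of this concentration test is given below. Estimating the mode as the midpoint of the tallest histogram bin, as well as the number of bins, are assumptions made for illustration, since no particular mode estimator is prescribed above.

```python
import numpy as np

def concentration_test(v: np.ndarray, delta: float, bins: int = 100) -> bool:
    """Check whether V values concentrate around the mode more tightly than a
    standard normal distribution does around its mean (19.74% within +/- 0.25
    standard deviations)."""
    counts, edges = np.histogram(v, bins=bins)
    mode = 0.5 * (edges[np.argmax(counts)] + edges[np.argmax(counts) + 1])
    half_width = 0.25 * v.std()
    share = np.mean((v >= mode - half_width) & (v <= mode + half_width))
    return share > 0.1974 + delta

# Example: 90% of pixels form a tight water-like peak, 10% come from brighter banks.
rng = np.random.default_rng(0)
v_grid = np.concatenate([rng.normal(0.35, 0.005, 56_250),   # water-like peak
                         rng.uniform(0.0, 1.0, 6_250)])      # heterogeneous rest
print(concentration_test(v_grid, delta=0.15))  # True for this sample
```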
In the second case (point 2 above), more than one mode in the empirical probability distribution of V corresponds to variable land cover within an image grid, i.e., the number of modes is likely to correspond to the number of prevailing (dominant) land cover types (Figure 4). In such cases, we face a mixture of probability distributions (histograms). In order to check whether a given multimodal histogram may contain a signal of water, we again apply the above-mentioned concentration test. If the V values of pixels are grouped around the most frequently occurring mode (a narrow V band), this may indicate water. Note that kurtosis is not used in the multimodal case, since a mixture of distributions produces spread around the overall mean (not corresponding to a specific component of a histogram) and, therefore, kurtosis does not measure how values are gathered near the modes.
A given 250 × 250 px grid receives flag 1 (water coastline detected) if:
  • In the case of one mode (situation 1 above), kurtosis is greater than 7 and concentration of values around the mode is detected using the concentration test with δ = 0.15;
  • In the case of several modes (situation 2 above), the concentration test with δ = 0.1 suggests a considerable concentration of values around the most frequently occurring mode.
A summary of the edge detection process is presented in Figure 5.
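Putting the pieces together, the decision rule can be sketched in Python as follows. This is a hedged illustration rather than the authors' code: the number of modes is assumed to be supplied by a separate unimodality test (the text above uses Silverman's test via the R function modetest), the mode is estimated from the tallest histogram bin, and the kurtosis threshold of 7 is assumed to refer to excess kurtosis.

```python
import numpy as np
from scipy.stats import kurtosis

def concentration_share(v: np.ndarray, bins: int = 100) -> float:
    """Fraction of V values within +/- 0.25 standard deviations of the mode."""
    counts, edges = np.histogram(v, bins=bins)
    mode = 0.5 * (edges[np.argmax(counts)] + edges[np.argmax(counts) + 1])
    half_width = 0.25 * v.std()
    return float(np.mean(np.abs(v - mode) <= half_width))

def coastline_flag(v: np.ndarray, n_modes: int) -> int:
    """Return 1 if the grid is flagged as containing a coastline, 0 otherwise.
    n_modes is assumed to come from a separate modality test."""
    if n_modes == 1:
        # Unimodal case: leptokurtic histogram AND concentration around the mode.
        # Assumption: the threshold of 7 is applied to excess kurtosis.
        return int(kurtosis(v) > 7 and concentration_share(v) > 0.1974 + 0.15)
    # Multimodal case: concentration around the most frequent mode only.
    return int(concentration_share(v) > 0.1974 + 0.1)
```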

3.3. Accuracy Assessment

Since our approach is solely based on RGB aerial images, a problem with detecting river coastlines occurs when vegetation or shadows obscure the view. This concerns both the automated detection of the coastline and the human vectorization of the coastline, the latter being used for validation. Figure 6 presents examples of such problems, which, in the case of vegetation overhanging the channel, may be driven by the nadir position of the camera.
In order to validate the automatically detected coastline, reference data were manually produced by assessing each image grid separately and assigning one of the following flags to each grid: 0 (no coastline) or 1 (coastline). As a result, thirty .csv files (one file for each image), in which one record corresponds to one slice, were produced. Subsequently, we compared the “waterline/no-waterline” grids indicated by the detector with the corresponding grids evaluated visually.
Accuracy assessment was carried out with the use of two statistics, detection rate (DR) and false hit rate (FHR), calculated as follows:
DR = TP / (TP + FN) × 100%,
FHR = FP / (TP + FP) × 100%,
where TP—true positive (both detector and human analyst assessed a grid as coastline); FN—false negative (detector indicated a grid as no coastline but human analyst interpreted it as coastline); and FP—false positive (detector indicated a grid as coastline but human analyst evaluated it as no coastline). The DR and FHR accuracy indices are also known in the literature as the producer accuracy and error of commission, respectively.
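For completeness, a minimal Python sketch of how DR and FHR can be computed from the per-grid 0/1 flags is given below; passing the flags as two plain lists (rather than the paired .csv files described above) is an assumption made for brevity.

```python
def detection_rates(detected: list[int], reference: list[int]) -> tuple[float, float]:
    """Compute detection rate (DR) and false hit rate (FHR), in percent, from
    per-grid 0/1 flags produced by the detector and by the human analyst."""
    tp = sum(d == 1 and r == 1 for d, r in zip(detected, reference))
    fn = sum(d == 0 and r == 1 for d, r in zip(detected, reference))
    fp = sum(d == 1 and r == 0 for d, r in zip(detected, reference))
    dr = 100.0 * tp / (tp + fn) if tp + fn else float("nan")
    fhr = 100.0 * fp / (tp + fp) if tp + fp else float("nan")
    return dr, fhr

# Example: 10 grids, 6 reference coastline grids; the detector finds 5 of them
# and adds 2 false hits, giving DR of about 83.3% and FHR of about 28.6%.
det = [1, 1, 1, 1, 1, 0, 1, 1, 0, 0]
ref = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
print(detection_rates(det, ref))
```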

3.4. Calibration of Detector

In order to check which set of parameters works best while detecting river coastlines, we tested nine scenarios (S1–S9) in which we took into account three variables: the kurtosis threshold, the concentration around the mode (determined by the empirical value δ), and the logical relationship between them (Table 5).
The experiment was conducted on the basis of seven images covering rivers and small lakes (Table 6). The aim of the calibration was to achieve the most considerable decrease in false hit rate (FHR) and, concurrently, the lowest possible decrease in detection rate (DR). We assumed that DR should be at least 60%. We calculated the differences in DR and FHR with respect to the original S1 scenario for each image. Then, we checked which scenario returned the best values in comparison with S1. Consequently, the 30 images juxtaposed in Table 4 were processed using the parameters adopted from S5.

4. Results

4.1. Detection Performance Statistics

Table 7 presents the detection performance statistics. Two threshold values for the detection rate (DR) were assumed: (1) 50% and (2) 60%. Subsequently, we checked how many cases (expressed as a percentage) were found above these values. Considering 250 × 250 px grids, threshold (1) was exceeded for 67% of all 30 images (80% for wide and 40% for narrow rivers). The higher threshold (2) was exceeded for 47% of all images (55% for wide and 30% for narrow rivers). Detection skills were therefore better for wide rivers. As DR is not a standalone measure of detection performance, we also calculated FHR and found that, for cases that exceeded threshold (1), FHR varied between 5.00% and 61.36%, with a mean of 36.63%. In order to check the impact of grid size on the detection skills, a division into smaller slices (125 × 125 px) was carried out. However, there was no considerable improvement in the detection performance. A higher DR was obtained only for eight out of thirty cases: five for narrow and three for wide rivers. Simultaneously, an increase in DR and a decrease in FHR were achieved for 2 out of 30 cases (images Biała_1 and Odra_3). Thus, the larger grid size was chosen for further analysis. All thirty analysed images, presenting the results visually, are included in the Supplementary Material.

4.2. Image-Specific Observations

The highest detection rate (DR) was achieved for Bystrzyca_2 (Figure 7a), which presents a narrow channel of Bystrzyca Dusznicka. The image was taken in winter when the sky was overcast, which resulted in colours very close to grey scale. Uniform, black water, which was distinguished from its white neighbourhood, resulted in a high DR (92%). However, there also occurred false positive hits (red squares) driven by other black features, like trees protruding from the snow cover. For images presenting narrow, natural rivers without regulation structures, and with the occurrence of shadows and reflections from the water surface, the detection performance was found to be much worse. In those situations, there were few false positives, but DR was low and the detector did not resolve the real coastlines; that is, there were a considerable number of false negatives (yellow squares). In contrast, the waterline detection in Kwisa (Figure 7b) ended up with a lack of false positive hits (FHR was equal to 0%); however, the DR was as low as 22%, which corresponds to an extremely high error of omission (above 78%). The reasons for low accuracy in such cases were sunlight reflection from the water surface and visible submerged riverbed forms. The detection performance was also affected by the river neighbourhood. Roads, buildings, and regularly shaped crop fields, characterized by homogeneous colours and located close to the river channel, formed local disturbances responsible for increasing the FHR (Biała_1, Figure 7c). The detector treated the boundaries of roads, buildings and crop fields as coastlines, and therefore numerous false positive indications (red squares) resulted in the highest FHR.

4.3. Visual Assessment

In addition to the accuracy assessment, the images can also be evaluated visually to observe whether true positives tend to create linear elements, which may indicate the approximate course of the river coastline. In Odra_8, it could be noticed that the green squares (TP) showed the course of the river coastline well (Figure 8a). On the other hand, there were a few yellow squares (FN) that made the detection rate (DR) lower. A similar situation was presented in Odra_4. Although there was only one red square (FP), which led to one of the lowest false hit rates (FHRs), the occurrence of FNs also decreased the DR (Figure 8b). The detection results for Bystrzyca_4 seemed to be good. However, Bystrzyca Dusznicka is a narrow river, and therefore the green grids were too big and touched each other (Figure 8c). Indicating one or two separate river coastlines was impossible, so the detection performance was not satisfactory in this case. Visually, the worst detection occurred for Biała_1 (Figure 8d). There were many FPs, which resulted in a high FHR, and many FNs, which led to a low DR. The stream in this picture was shallow and some riverbed forms were visible because of the high transparency and small depth of the water. The detector did not recognize the water as a surface with uniform colours, and was not able to resolve the coastline.

4.4. Factors Affecting Detection

The coverage water class (Table 4, including its definition in a footnote) plays an important role, since there exists a visible relationship between the features recorded in an image and the detection rate (DR). For the lower classes (1, 2), which mean that a small part of the entire image is covered by water, the DR is low. For the higher classes (3, 4), the DR is high. Moreover, in the cases presented in this paper, the coverage water class is related to the width of a river. The lower classes occur in the case of narrow rivers and streams, while the higher classes correspond to wide rivers. For most of the analysed images, the detection performance is better for wide rivers than for narrow ones.
In cases with an easily detectable coastline, for instance when banks are clearly visible as a result of regulation works and the presence of retaining walls (e.g., Bystrzyca_2 and Bystrzyca_4), the detector works well. The performance can also be improved when the contrast between water and its surroundings is enhanced, for instance due to snow cover in the vicinity of a channel. The skill of the coastline detector deteriorates when the river becomes narrow. Indeed, one or two separate coastlines of a narrow river cannot be resolved, either with 250 × 250 px or 125 × 125 px grids (Table 7).
Visibility of the river coastline through vegetation is yet another factor that affects the detector performance. When river banks are visible and not covered by vegetation, the detector works better and, in most cases, the DR is high. The presence of trees or scrubs above the river banks is the reason why the detected coastline is shifted from the real coastline position. According to Figure 6 and the description in Section 3.3, the detector resolves a boundary between water and “no-water” fragments within the image; therefore, it cannot delineate a real coastline hidden by vegetation.
The occurrence of uniformly coloured human-made structures, such as roads and buildings, may cause the detector to misinterpret such places as water and, as a consequence, produce a boundary between them and their neighbourhood. Detecting edges that are not associated with water leads to additional errors (false positives) and an increase in the false hit rate (FHR). Also, local structures associated with a single land unit could lead to false positive hits.
We identified a few factors controlling the performance of the elaborated waterline detection method. Below, they are grouped according to false responses (FP/FN).
  • Factors which lead to the incorrect detection of river coastlines in places where there are no coastlines (FP):
    The occurrence of linear objects and objects with uniform colours (like roads, buildings) in the vicinity of a river (Figure 7c and Figure 8d). The detector interprets these areas as water due to the presence of edges. For the detector, there is no difference between a water edge and the edge of a different uniformly coloured object.
    The presence of a continuous and homogeneous snow surface in a photo. If there exist visible natural or artificial objects (such as scrubs, crop fields without snow, trees) with colours different from snow and also revealing a linear character, the detector indicates a borderline at places other than the coastline (Figure 7a).
    Vegetation protruding from water. There is only one false positive in Figure 8b, where an edge was detected between the water surface and vegetation protruding from the water.
  • Factors which lead to omitting river banks in places where there are coastlines (FN):
    There occurred cases where the river coastline was clearly visible in the imagery, but the detector did not resolve it, leading to the occurrence of FNs. When the river is shallow, underwater bedforms may be visible, thus limiting the waterline detection performance (Figure 8c,d).
    False negative indications appear where only a small part of river coastline falls within the grid (Figure 7 and Figure 8).
    When the edge of a grid overlaps with the real coastline (so that hardly any part of the river falls into the grid), our method does not resolve the coastline in question (Figure 7c and Figure 8d). In order to reduce this type of error, shifted grids may be employed [39].

5. Discussion

In the literature, there exist several approaches to delineate river coastlines or estimate water stages. However, additional survey devices, sensors, or data are usually necessary, e.g., a total station, a GNSS receiver, or extra spectral bands other than RGB. The advantage of our approach is that we use only RGB images acquired by UAVs (no other equipment or data are required), and also that we reduce the image colour dimension from 3 to 1 in order to work on the V component (which makes the computations simpler and possibly faster). Focusing on papers which use only RGB images, their general advantages are simple, cheap and fast data acquisition. On the other hand, such approaches are sensitive to water surface disturbances (e.g., reflections, waves, shadows). In our approach, we also encountered these constraints, which affected the detection accuracy. Table 8 presents the advantages and disadvantages of different waterline detection methods that use mainly RGB images.
Ridolfi and Manciola [23], as well as Tymków et al. [24], captured RGB images by means of UAVs, and the collected material was subsequently processed to obtain coastline or water body extent.
In [23], a pixel-based classification was carried out and, in addition, an edge detection procedure was employed in accordance with the Canny method [40] to delineate the coastline. By contrast, our method uses a coarse lattice (grids of size 250 × 250 pixels) as an indivisible domain, which is a conceptually different approach from the one described in [23]. The similarity resides in the fact that both methods aim at edge detection as a tool for delineating waterlines. However, the applied edge detection methods are unlike each other: Ridolfi and Manciola [23] consider local maximum values of the gradient of a grey-scale image, while in this study we explore empirical probability distributions (histograms) of the V band extracted from the HSV colour space. Also, unlike our study, ground control points (GCPs) were required in the approach of Ridolfi and Manciola [23].
Tymków et al. [24] claimed that “On average, the best was obtained for combined, four-bands ortho mosaic created from RGB and TIR images”, so an additional sensor (a thermal camera) is necessary to obtain the best results. Our method is conceptually different, since it accepts a compromise between coastline detection performance and universality. Namely, the universality in question is associated with access to RGB cameras (most drones have them onboard) which, if used separately from other sensors and devices, may offer sufficient data to reconstruct coastlines roughly, albeit with lower accuracy.
As stated above, we reduced the dimension of colour space of aerial images (RGB → HSV → V) following the finding of Rankin and Matthies [15] that water has uniform brightness, and therefore V remains stable when representing water. Similarly, Namikawa et al. [12] detected the water extent in RGB satellite images by transferring them into HSV and reducing the dimension by extracting hue (the H component in the HSV colour space), which differs from our approach. The concept was adopted from [41], where the H component was found to enable good visual separability between water and other features, and the separability in question was found to be even better than for Normalized Difference Water Index (NDWI) using either near-infrared or shortwave-infrared.
It is also worth comparing the waterline detector presented in this study with the coastline detection method elaborated by Wei and Zhang [16], designed to be applied on an unmanned surface vehicle (USV). Such vehicles float on the water surface and, therefore, the photos they take are not nadir images, but horizontal or oblique ones. This forms a different view, since reflections from the water are different, the terrain is visible from a different perspective, and most images capture the sky as well. In [16], structure and texture analysis of photographs was employed, and the mean error was 1.84 px. Since our detector works on a coarse lattice of 250 × 250 px, a quantitative comparison with the method of Wei and Zhang [16] is impossible. However, a visual analysis of case studies indicates similar constraints: both approaches are sensitive to shadows, depend on illumination, and are vulnerable to the presence of vegetation. Similarly to [16], a non-nadir camera pose was studied by Kröhnert and Meichsner [18], who detected coastlines in close-range photographs taken by smartphones. Visually, waterlines were resolved well in [18]; however, the non-nadir frames presented waterlines located close to the lens (5–35 m), which makes the results difficult to compare with our findings.
Eltner et al. [19,21] focused on a similar problem from a different perspective. Namely, these authors mounted a low-cost camera locally onto a lantern pole, bridge or tree. The camera captured several RGB images in certain time intervals, and time-lapse image sequences were created. In our case, UAVs moved and photos were taken with spatial overlap, thus their centres of projection were different. In general, during a photogrammetric flight, there is no possibility to acquire image sequences in the same, static location. Thus, our data acquisition procedure differs from the approach proposed by Eltner et al. [19,21].
Niedzielski et al. [7] utilized RGB aerial images to detect significant changes in water levels on the basis of variations in water surface areas. One form of input data was the water body extent determined by time-consuming manual digitization. Our method, as well as other RGB-based approaches outlined above, may serve the purpose of determining such extents, which may be a step towards making the method described in [7] unsupervised.

6. Conclusions

In this paper, a histogram-based method for delineating river coastlines in UAV-acquired nadir RGB images was presented. Unlike other methods, which resolve waterlines on the basis of RGB imagery at the pixel level, our approach operates on a coarse lattice superimposed on a photo (grid size of 250 × 250 px). The reason behind this is twofold. Firstly, the detector analyses the probability distribution of the V component in the HSV colour space, which is doable when the variability of V is available (not single pixel values, but samples of pixels extracted from each grid). Secondly, the gridwise (and thus spatially coarse) domain facilitates computations, which is important when the coastline should be rapidly resolved during floods. Computation time is also saved due to the reduction in the dimension of the input data, attained by projecting photos from RGB to HSV and, subsequently, selecting V as the band which is said to be stable for water.
Our approach was tested and validated on a diversified set of 30 nadir aerial images collected by UAV-mounted cameras at 14 sites along nine rivers in Poland, revealing dissimilar fluvial characteristics (Table 3). We identified several factors limiting the method: (1) the river width, (2) the land cover, (3) shadows, (4) vegetation, and (5) the characteristics of the water. Firstly, we found that the performance of our detector was better for wide rivers than for narrow channels. This may be related to the fact that the detector is efficient when water covers the majority of an image area. This means that the water coverage class (Table 4) plays an important role in controlling the detection performance. Secondly, land cover (more precisely, the homogeneity of its colour) in the vicinity of a river is also a significant factor that influences the performance in question. When the image includes objects with heterogeneous colours, thus revealing a heterogeneous V value in the HSV colour model, there exist favourable conditions to delineate river coastlines. On the other hand, if there are uniformly coloured objects, like roads or buildings, the detector may incorrectly classify such areas as coastlines. Moreover, shadows and vegetation were found to impede waterline determination. As presented in Figure 6, shadows cast by trees are likely to be incorrectly recognized as a coastline and, consequently, classified as a false positive indication. Also, the characteristics of the water itself were found to impact the detection performance. Our method works well primarily for water of uniform colour. Such a feature is present when a river is wide and deep (no or limited signal from underwater bedforms).
Despite the above-mentioned limitations, our approach provides quick, cheap, and rough detection of waterlines. It may be utilized to analyse the diversified water environment, especially where water is deep. Detecting river or lake coastlines, as well as monitoring their changes, are potential applications of the method presented in this paper. It is solely based on RGB imagery, and both satellite and low-altitude photos can be used as input data.
Further studies should focus on migrating from coarse grids to lines, and on correcting detection errors. To increase the detection accuracy, spatial filtering could be applied; namely, based on the analysis of neighbouring grids, errors could be eliminated. If we assume that the river coastline is linearly continuous, isolated positives are artefacts and should be changed to negatives, while isolated negatives may become positives, for example through interpolation.
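As an illustration of the kind of spatial filtering envisaged here, the sketch below removes isolated positive grids and fills negative grids surrounded by positives, using a simple neighbour count on the boolean grid lattice. The 8-connected neighbourhood and the thresholds are assumptions for illustration, not a tested part of the method.

```python
import numpy as np

def filter_isolated(flags: np.ndarray) -> np.ndarray:
    """Simple neighbour-based cleaning of a 0/1 grid of coastline flags:
    isolated positives become negatives, and negatives surrounded by many
    positives become positives (8-connected neighbourhood)."""
    padded = np.pad(flags, 1, mode="constant")
    out = flags.copy()
    rows, cols = flags.shape
    for i in range(rows):
        for j in range(cols):
            # Number of flagged neighbours in the surrounding 3 x 3 window.
            neighbours = padded[i:i + 3, j:j + 3].sum() - flags[i, j]
            if flags[i, j] == 1 and neighbours == 0:
                out[i, j] = 0          # isolated positive -> artefact
            elif flags[i, j] == 0 and neighbours >= 6:
                out[i, j] = 1          # gap surrounded by coastline -> fill
    return out

# Example: a single stray detection far from the coastline is removed.
grid = np.zeros((5, 5), dtype=int)
grid[2, :] = 1      # a continuous horizontal coastline
grid[0, 4] = 1      # an isolated false positive
print(filter_isolated(grid))
```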

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/rs16142565/s1.

Author Contributions

Conceptualization, G.W., M.W. and T.N.; methodology, T.N., G.W. and M.W.; software, G.W., M.W. and T.N.; validation, G.W. and M.W.; formal analysis, G.W., M.W. and T.N.; writing—original draft preparation, T.N., G.W. and M.W.; writing—review and editing, T.N., M.W. and G.W.; visualization, G.W. and M.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Science Centre grant number 2020/38/E/ST10/00295.

Data Availability Statement

The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
UAV: Unmanned aerial vehicle
RGB: Red, green, blue
HSV: Hue, saturation, value
UGV: Unmanned ground vehicle
USV: Unmanned surface vehicle
ATO: Above take-off
DR: Detection rate
FHR: False hit rate
TP: True positive
FN: False negative
FP: False positive

References

  1. Ledger, M.E.; Milner, A.M. Extreme events in running waters. Freshw. Biol. 2015, 60, 2455–2460. [Google Scholar] [CrossRef]
  2. Raymond, C.; Horton, R.M.; Zscheischler, J.; Martius, O.; AghaKouchak, A.; Balch, J.; Bowen, S.G.; Camargo, S.J.; Hess, J.; Kornhuber, K.; et al. Understanding and managing connected extreme events. Nat. Clim. Chang. 2020, 10, 611–621. [Google Scholar] [CrossRef]
  3. Kundzewicz, Z.W.; Pińskwar, I.; Brakenridge, G.R. Changes in river flood hazard in Europe: A review. Hydrol. Res. 2018, 49, 294–302. [Google Scholar] [CrossRef]
  4. Flener, C.; Vaaja, M.; Jaakkola, A.; Krooks, A.; Kaartinen, H.; Kukko, A.; Kasvi, E.; Hyyppä, H.; Hyyppä, J.; Alho, P. Seamless Mapping of River Channels at High Resolution Using Mobile LiDAR and UAV-Photography. Remote Sens. 2013, 5, 6382–6407. [Google Scholar] [CrossRef]
  5. Tamminga, A.D.; Eaton, B.C.; Hugenholtz, C.H. UAS-based remote sensing of fluvial change following an extreme flood event. Earth Surf. Process. Landforms 2015, 40, 1464–1476. [Google Scholar] [CrossRef]
  6. Langhammer, J.; Vacková, T. Detection and Mapping of the Geomorphic Effects of Flooding Using UAV Photogrammetry. Pure Appl. Geophys. 2018, 175, 3223–3245. [Google Scholar] [CrossRef]
  7. Niedzielski, T.; Witek, M.; Spallek, W. Observing river stages using unmanned aerial vehicles. Hydrol. Earth Syst. Sci. 2016, 20, 3193–3205. [Google Scholar] [CrossRef]
  8. Hemmelder, S.; Marra, W.; Markies, H.; De Jong, S.M. Monitoring river morphology & bank erosion using UAV imagery—A case study of the river Buëch, Hautes-Alpes, France. Int. J. Appl. Earth Obs. Geoinf. 2018, 73, 428–437. [Google Scholar]
  9. Tsatsaris, A.; Kalogeropoulos, K.; Stathopoulos, N.; Louka, P.; Tsanakas, K.; Tsesmelis, D.E.; Krassanakis, V.; Petropoulos, G.P.; Pappas, V.; Chalkias, C. Geoinformation Technologies in Support of Environmental Hazards Monitoring under Climate Change: An Extensive Review. ISPRS Int. J. Geo-Inf. 2021, 10, 94. [Google Scholar] [CrossRef]
  10. Karamuz, E.; Romanowicz, R.J.; Doroszkiewicz, J. The use of unmanned aerial vehicles in flood hazard assessment. J. Flood Risk Manag. 2020, 13, e12622. [Google Scholar] [CrossRef]
  11. Ibrahim, N.S.; Sharun, S.M.; Osman, M.K.; Mohamed, S.B.; Abdullah, S.H.Y.S. The application of UAV images in flood detection using image segmentation techniques. Indones. J. Electr. Eng. Comput. Sci. 2021, 23, 1219–1226. [Google Scholar] [CrossRef]
  12. Namikawa, L.M.; Körting, T.S.; Castejon, E.F. Water body extraction from RapidEye images: An automated methodology based on Hue component of color transformation from RGB to HSV model. Braz. J. Cartogr. 2016, 68, 1097–1111. [Google Scholar] [CrossRef]
  13. Deng, Y.; Jiang, W.; Tang, Z.; Li, J.; Lv, J.; Chen, Z.; Jia, K. Spatio-Temporal Change of Lake Water Extent in Wuhan Urban Agglomeration Based on Landsat Images from 1987 to 2015. Remote Sens. 2017, 9, 270. [Google Scholar] [CrossRef]
  14. Viaña-Borja, S.P.; Ortega-Sánchez, M. Automatic Methodology to Detect the Coastline from Landsat Images with a New Water Index Assessed on Three Different Spanish Mediterranean Deltas. Remote Sens. 2019, 11, 2186. [Google Scholar] [CrossRef]
  15. Rankin, A.; Matthies, L. Daytime Water Detection Based on Color Variation. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, 18–22 October 2010. [Google Scholar]
  16. Wei, Y.; Zhang, Y. Effective Waterline Detection of Unmanned Surface Vehicles Based on Optical Images. Sensors 2016, 16, 1590. [Google Scholar] [CrossRef] [PubMed]
  17. Zhan, W.; Xiao, C.; Yuan, H.; Wen, Y. Effective Waterline Detection for Unmanned Surface Vehicles in Inland Water. In Proceedings of the Seventh International Conference on Image Processing Theory, Tools and Applications (IPTA), Montreal, QC, Canada, 28 November–1 December 2017. [Google Scholar]
  18. Kröhnert, M.; Meichsner, R. Segmentation of Environmental Time Lapse Image Sequences for the Determination of Shore Lines Captured by Hand-Held Smartphone Cameras. ISPRS Ann. Photogramm. Remote. Sens. Spat. Inf. Sci. 2017, IV-2/W4, 1–8. [Google Scholar] [CrossRef]
  19. Eltner, A.; Elias, M.; Sardemann, H.; Spieler, D. Automatic Image-Based Water Stage Measurement for Long-Term Observations in Ungauged Catchments. Water Resour. Res. 2018, 54, 362–371. [Google Scholar] [CrossRef]
  20. Kröhnert, M.; Eltner, A. Versatile mobile and stationary low-cost approaches for hydrological measurements. Int. Arch. Photogramm. Remote. Sens. Spat. Inf. Sci. 2018, XLII-2, 543–550. [Google Scholar] [CrossRef]
  21. Eltner, A.; Bressan, P.O.; Akiyama, T.; Gonçalves, W.N.; Marcato, J., Jr. Using Deep Learning for Automatic Water Stage Measurements. Water Resour. Res. 2021, 57, e2020WR027608. [Google Scholar] [CrossRef]
  22. Bandini, F.; Sunding, T.P.; Linde, J.; Smith, O.; Jensen, I.K.; Köppl, C.J.; Butts, M.; Bauer-Gottwein, P. Unmanned Aerial System (UAS) observations of water surface elevation in a small stream: Comparison of radar altimetry, LIDAR and photogrammetry techniques. Remote Sens. Environ. 2020, 237, 111487. [Google Scholar] [CrossRef]
  23. Ridolfi, E.; Manciola, P. Water Level Measurements from Drones: A Pilot Case Study at a Dam Site. Water 2018, 10, 297. [Google Scholar] [CrossRef]
  24. Tymków, P.; Jóźków, G.; Walicka, A.; Karpina, M.; Borkowski, A. Identification of Water Body Extent Based on Remote Sensing Data Collected with Unmanned Aerial Vehicle. Water 2019, 11, 338. [Google Scholar] [CrossRef]
  25. Xue, M.; Shivakumara, P.; Wu, X.; Lu, T.; Pal, U.; Blumenstein, M.; Lopresti, D. Deep invariant texture features for water image classification. Appl. Sci. 2020, 2, 2068. [Google Scholar] [CrossRef]
  26. Yu, J.; Lin, Y.; Zhu, Y.; Xu, W.; Hou, D.; Huang, P.; Zhang, G. Segmentation of River Scenes Based on Water Surface Reflection Mechanism. Appl. Sci. 2020, 10, 2471. [Google Scholar] [CrossRef]
  27. Witherow, M.A.; Elbakary, M.I.; Iftekharuddin, K.M.; Cetin, M. Analysis of Crowdsourced Images for Flooding Detection. In Proceedings of the VI ECCOMAS Thematic Conference on Computational Vision and Medical Image Processing, Porto, Portugal, 18–20 October 2017. [Google Scholar]
  28. Kucharczyk, M.; Hugenholtz, C.H. Remote sensing of natural hazard-related disasters with small drones: Global trends, biases, and research opportunities. Remote Sens. Environ. 2021, 264, 112577. [Google Scholar] [CrossRef]
  29. Zhang, Z.; Zhu, L. A Review on Unmanned Aerial Vehicle Remote Sensing: Platforms, Sensors, Data Processing Methods, and Applications. Drones 2023, 7, 398. [Google Scholar] [CrossRef]
  30. Rao, B.; Gopi, A.G.; Maione, R. The societal impact of commercial drones. Technol. Soc. 2016, 45, 83–90. [Google Scholar] [CrossRef]
  31. Aydin, B. Public acceptance of drones: Knowledge, attitudes, and practice. Technol. Soc. 2019, 59, 101180. [Google Scholar] [CrossRef]
  32. Maddikunta, P.K.R.; Hakak, S.; Alazab, M.; Bhattacharya, S.; Gadekallu, T.R.; Khan, W.Z.; Pham, Q.V. Unmanned aerial vehicles in smart agriculture: Applications, requirements, and challenges. IEEE Sens. J. 2021, 21, 17608–17619. [Google Scholar] [CrossRef]
  33. Sun, W.; Chen, C.; Liu, W.; Yang, G.; Meng, X.; Wang, L.; Ren, K. Coastline extraction using remote sensing: A review. GIScience Remote Sens. 2023, 60, 2243671. [Google Scholar] [CrossRef]
  34. Abdullah, S.L.S.; Hambali, H.A.; Jamil, N. Segmentation of Natural Images Using an Improved Thresholding-Based Technique. Procedia Eng. 2012, 41, 938–944. [Google Scholar] [CrossRef]
  35. Silverman, B.W. Using kernel density estimates to investigate multimodality. J. R. Stat. Soc. Ser. B 1981, 43, 97–99. [Google Scholar] [CrossRef]
  36. Curran, P.J.; West, S.G.; Finch, J.F. The robustness of test statistics to nonnormality and specification error in confirmatory factor analysis. Psychol. Methods 1996, 1, 16–29. [Google Scholar] [CrossRef]
  37. West, S.G.; Finch, J.F.; Curran, P.J. Structural equation models with nonnormal variables: Problems and remedies. In Structural Equation Modeling: Concepts, Issues, and Applications; Hoyle, R.H., Ed.; Sage: New York, NY, USA, 1995; pp. 56–75. [Google Scholar]
  38. Kim, H.Y. Statistical notes for clinical researchers: Assessing normal distribution (2) using skewness and kurtosis. Restor. Dent. Endod. 2013, 38, 53–54. [Google Scholar] [CrossRef] [PubMed]
  39. Niedzielski, T.; Jurecka, M.; Stec, M.; Wieczorek, M.; Miziński, B. The nested k-means method: A new approach for detecting lost persons in aerial images acquired by unmanned aerial vehicles. J. Field Robot. 2017, 34, 1395–1406. [Google Scholar] [CrossRef]
  40. Canny, J. A Computational Approach to Edge Detection. IEEE Trans. Pattern Anal. Mach. Intell. 1987, 6, 184–203. [Google Scholar]
  41. Namikawa, L.M. Imagens landsat 8 para monitoramento de volume de água em reservatórios: Estudo de caso nas barragens Jaguari e Jacareí do Sistema Cantareira. SimpóSio Bras. Sensoriamento Remoto 2015, 17, 4828–4835. [Google Scholar]
Figure 1. Study area: middle Odra river (a), upper Odra river (b), Izera and Kwisa rivers in West Sudetes (c), Biała Lądecka, Bystrzyca Dusznicka, Nysa Kłodzka and Ścinawka rivers in Kłodzko County (d), lower Bug river (e).
Figure 2. Workflow (abbreviations for DR and FHR are defined in Section 3.3).
Figure 3. Histogram examples for unimodal distribution of V: (a) TP—detector and expert indicated coastline, (b) FP—only detector indicated coastline, (c) TN—detector and expert indicated no coastline, (d) FN—only expert indicated coastline (abbreviations TP, FP, TN and FN are defined in Section 3.3).
Figure 4. Histogram examples for multimodal distribution of V: (a) TP—detector and expert indicated coastline, (b) FP—only detector indicated coastline, (c) TN—detector and expert indicated no coastline, (d) FN—only expert indicated coastline (abbreviations TP, FP, TN and FN are defined in Section 3.3).
Figure 5. Edge detection process (abbreviations TP, FP, TN and FN are defined in Section 3.3).
Figure 6. Different levels of complexity of waterline detection: visible and obvious coastline (a), coastline hidden by trees (b), shadows cast by trees (c), coastline difficult to delineate (d). Blue line—real coastline; yellow line—coastline delineated by human analyst; green line—expected coastline delineated by the detector.
Figure 7. The best and worst performance of waterline detection according to accuracy assessment measures: the highest DR (a), the lowest DR (as well as the lowest FHR) (b), the highest FHR (c).
Figure 8. Visual assessment of detector performance.
Table 2. Popularity of RGB-based UAV platforms in scientific and business activities.
Reference | Location in Text | Information on RGB
Kucharczyk and Hugenholtz [28] | Table 5 in [28] | 86.77% of articles (from among 635 papers) use RGB sensors
Zhang and Zhu [29] | Subsection 2.2.1 in [29] | RGB cameras are the prevailing type of remote sensing sensor mounted onboard UAVs
Rao et al. [30] | Subsection 2.2 in [30] | Aerial photography is the most common usage of consumer and hobbyist UAVs
Aydin [31] | Section Background in [31] | Based on the Federal Aviation Administration Aerospace Report 2017, 3.55–4.47 million hobbyist drones were predicted by 2021 (following [30], hobbyist drones are used mainly for photography; see previous row in this table)
Maddikunta et al. [32] | Paragraph II C 4) (a) in [32] | RGB cameras are the most popular sensors in precision agriculture and smart agro applications
Table 3. River characteristics at dates when specific images were acquired. Mean water level corresponds to an average value of water levels recorded over the hydrological year (in Poland, this lasts from 1 November to 31 October) in which a given photo was taken. The hydrological characteristics juxtaposed in this table refer to the gauge stations located closest to the image locations.
No. | Image Name | Type 1 | Min. Width 2 [m] | Max. Width [m] | Length [m] | Water Level [cm] | Mean Water Level [cm] | Discharge [m³/s]
1 | Bug_1 | lowland | 83 | 98 | 165 | 240 b | 302 | 49.30
2 | Bug_2 | lowland | 90 | 100 | 140 | 230 b | 296 | 42.20
3 | Bug_3 | lowland | 80 | 90 | 150 | 230 b | 296 | 42.20
4 | Biała_1 | upland | 11 | 16 | 115 | 42 b | 55 | 2.78
5 | Biała_2 | upland | 11 | 16 | 130 | 67 a | 52 | 4.04
6 | Bystrzyca_1 | upland | 0 | 9 | 165 | 19 | 19 | 1.88
7 | Bystrzyca_2 | upland | 9 | 11 | 200 | 19 | 19 | 1.88
8 | Bystrzyca_3 | upland | 6 | 11 | 175 | 4 b | 19 | 0.91
9 | Bystrzyca_4 | upland | 1 | 10 | 115 | 12 b | 19 | 1.44
10 | Bystrzyca_5 | upland | 7 | 11 | 225 | 190 a * | 121 | 5.67
11 | Izera | mountain | 0 | 11 | 95 | No data | No data | 0.15
12 | Kwisa | mountain | 7 | 20 | 200 | 107 b | 115 | No data
13 | Nysa_1 | upland | 23 | 38 | 135 | 98 a | 96 | 20.50
14 | Nysa_2 | upland | 5 | 25 | 180 | 89 b | 101 | 4.37
15 | Nysa_3 | upland | 5 | 25 | 190 | 88 b | 112 | 5.08
16 | Nysa_4 | upland | 19 | 34 | 135 | 80 b | 96 | 10.50
17 | Odra_1 | lowland | NA | NA | NA | 165 b | 178 | 105.00
18 | Odra_2 | lowland | 75 | 130 | 125 | 133 b | 178 | 77.10
19 | Odra_3 | lowland | 70 | 78 | 115 | 167 b | 245 | 107.00
20 | Odra_4 | lowland | NA | NA | NA | 175 b | 178 | 115.00
21 | Odra_5 | lowland | 93 | 100 | 145 | 230 b | 232 | No data
22 | Odra_6 | lowland | 115 | 120 | 165 | 230 b | 232 | No data
23 | Odra_7 | lowland | 90 | 105 | 150 | 229 b | 232 | No data
24 | Odra_8 | lowland | 95 | 125 | 160 | 225 b | 240 | No data
25 | Odra_9 | lowland | 100 | 235 | 115 | 189 a | 167 | 204.00
26 | Odra_10 | lowland | 83 | 130 | 140 | 164 b | 179 | 108.00
27 | Odra_11 | lowland | 96 | 145 | 85 | 187 b | 242 | 136.00
28 | Odra_12 | lowland | 66 | 153 | 160 | 218 b | 233 | 101.00
29 | Odra_13 | lowland | 75 O; 25 B | 145 O; 55 B | 145 O; 125 B | 251 a O; 221 a B | 233 O; 177 B | 135.00 O; 17.30 B
30 | Ścinawka | upland | 0 | 20 | 140 | 221 a | 51 | 46.30
1 Refers to the section of the river where the photo was taken; 2 0 may indicate fully covered river channel; * river channel has been regulated and water gauge has been rebuilt (zero of water gauge has been changed); O/B Odra/Barycz; a/b above/below mean water level.
Table 4. Descriptions of images analysed in the study.
No. | River | Region | Image Name | Date | Water Coverage Class ¹ | Land Cover | Channel Characteristic | Detection Disturbances
1 | Bug | Binduga | Bug_1 | 27/07/2017 | 2 | coniferous forest, scrub, meadow | visible two banks | shadows, sunlight reflections
2 | Bug | Binduga | Bug_2 | 26/06/2018 | 3 | scrub | visible two banks |
3 | Bug | Binduga | Bug_3 | 26/06/2018 | 3 | coniferous forest, scrub | partially covered two banks | shadows, reflections
4 | Biała Lądecka | Żelazno | Biała_1 | 27/11/2012 | 1 | scrub, meadow, arable land, discontinuous building, roads | visible two banks | visible riverbed forms and submerged vegetation
5 | Biała Lądecka | Żelazno | Biała_2 | 21/05/2015 | 1 | scrub, meadow, arable land, discontinuous building, roads | partially covered two banks; submerged bars | shadows, visible riverbed forms and submerged vegetation
6 | Bystrzyca Dusznicka | Szalejów Dolny | Bystrzyca_1 | 08/01/2013 | 1 | broad-leaved forest, coniferous trees, meadow, discontinuous building | defence; partially covered two banks |
7 | Bystrzyca Dusznicka | Szalejów Dolny | Bystrzyca_2 | 08/01/2013 | 1 | broad-leaved and coniferous trees, meadow, roads | defence; visible two banks | local hydraulic disturbances; white water
8 | Bystrzyca Dusznicka | Szalejów Dolny | Bystrzyca_3 | 21/08/2013 | 1 | broad-leaved trees, meadow, arable land, roads | defence; partially covered two banks; sand bars | visible riverbed forms; local hydraulic disturbances; white water
9 | Bystrzyca Dusznicka | Szalejów Dolny | Bystrzyca_4 | 27/09/2013 | 1 | broad-leaved trees, scrub, meadow, arable land, discontinuous building, roads | defence; partially covered two banks | surface waves; visible riverbed forms; submerged vegetation
10 | Bystrzyca Dusznicka | Szalejów Dolny | Bystrzyca_5 | 29/06/2020 | 1 | broad-leaved trees, scrub, meadow, arable land, roads | defence; partially covered two banks | surface waves; white water
11 | Izera | Hala Izerska | Izera | 30/08/2019 | 1 | coniferous forest and trees, meadow | partially covered two banks; bars (mostly submerged) | visible riverbed forms
12 | Kwisa | Gryfów Śląski | Kwisa | 29/06/2016 | 1 | coniferous forest, scrub, meadow | partially covered two banks | shadows; reflections; surface waves; local hydraulic disturbances; visible riverbed forms and submerged vegetation
13 | Nysa Kłodzka | Byczeń | Nysa_1 | 30/03/2021 | 2 | broad-leaved trees, meadow, gravel-sand bar | visible two banks; bars (partially submerged) | surface waves; visible riverbed forms
14 | Nysa Kłodzka | Ławica | Nysa_2 | 04/07/2019 | 1 | broad-leaved forest, meadow | partially covered two banks; bars (partially submerged); submerged vegetation | shadows; reflections; surface waves; local hydraulic disturbances; visible riverbed forms and submerged vegetation
15 | Nysa Kłodzka | Ławica | Nysa_3 | 04/12/2019 | 1 | broad-leaved forest, meadow, arable land | partially covered two banks | shadows; surface waves; visible riverbed forms and submerged vegetation
16 | Nysa Kłodzka | Śrem | Nysa_4 | 22/06/2021 | 1 | broad-leaved forest, scrub, meadow | partially covered two banks | shadows
17 | Odra | Dąbrowa | Odra_1 | 09/12/2021 | 4 | broad-leaved forest, meadow | visible one bank; groynes | shadows
18 | Odra | Dąbrowa | Odra_2 | 10/11/2021 | 4 | broad-leaved forest, scrub, sand bars | visible two banks; groynes; sand bars | shadows; visible riverbed forms
19 | Odra | Dąbrowa | Odra_3 | 26/07/2021 | 3 | broad-leaved trees, meadow, arable land | visible two banks | shadows, reflections
20 | Odra | Dąbrowa | Odra_4 | 27/03/2022 | 2 | broad-leaved forest, scrub | visible one bank; groyne |
21 | Odra | Gogolin | Odra_5 | 21/11/2020 | 2 | broad-leaved trees, scrub, arable land, roads | visible one bank | surface waves
22 | Odra | Gogolin | Odra_6 | 21/11/2020 | 3 | broad-leaved trees, scrub, arable land, road | partially covered two banks; river regulation structures | shadows; white water
23 | Odra | Gogolin | Odra_7 | 22/11/2020 | 4 | broad-leaved forest, scrub | partially covered two banks | sunlight reflections; surface waves
24 | Odra | Gogolin | Odra_8 | 17/02/2020 | 3 | broad-leaved forest, scrub, road | partially covered two banks | visible riverbed forms
25 | Odra | Kunice | Odra_9 | 24/03/2022 | 3 | broad-leaved trees, scrub, building | visible one bank; groyne | shadows; white water
26 | Odra | Pomorsko | Odra_10 | 06/12/2021 | 3 | broad-leaved trees, scrub | visible one bank; groynes; submerged bars | shadows; visible riverbed forms
27 | Odra | Pomorsko | Odra_11 | 28/06/2021 | 4 | scrub | visible one bank |
28 | Odra | Wyszanów | Odra_12 | 07/12/2021 | 3 | broad-leaved trees, scrub | visible one bank; groynes; submerged bars | visible riverbed forms
29 | Odra/Barycz | Wyszanów | Odra_13 | 31/01/2022 | 3 | broad-leaved trees, meadow | partially covered three ² banks | shadows
30 | Ścinawka | Gorzuchów | Ścinawka | 14/10/2020 | 1 | broad-leaved trees, scrub, meadow, arable land, road, water body | almost all banks covered (both river and water body) | shadows; surface waves; white water
¹ Percentage of the photo covered by water: 1: 0–25%, 2: 25–50%, 3: 50–75%, 4: 75–100%; ² one bank is the Odra, two banks are the Barycz mouth.
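Footnote 1 of Table 4 encodes water coverage as four quartile classes of the photo area. A trivial sketch of that mapping is given below; it is an illustration only, and the handling of the class boundaries (here a fraction of exactly 0.25 falls into class 2, and 1.0 into class 4) is our assumption.

```python
# A small sketch, not taken from the paper, that assigns the water coverage
# class used in Table 4 (footnote 1) from the fraction of a photo covered by
# water. Boundary handling is assumed, not prescribed by the paper.
def water_coverage_class(covered_fraction: float) -> int:
    """Map a water-covered fraction (0-1) to class 1 (0-25%) ... 4 (75-100%)."""
    if not 0.0 <= covered_fraction <= 1.0:
        raise ValueError("fraction must lie in [0, 1]")
    # min() keeps a fraction of exactly 1.0 in class 4 instead of a fictitious class 5.
    return min(int(covered_fraction * 4) + 1, 4)

print(water_coverage_class(0.62))  # class 3 (50-75%)
```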
Table 5. Parameters of the detector.
Scenario | Kurtosis (1) | δ* for One of the Distributions, Unimodal (2) | δ* for One of the Distributions, Multimodal (3) | 2 OR/AND 3
S1 | 6 | 0.1 | 0.05 | OR
S2 | 7 | 0.1 | 0.05 | OR
S3 | 7 | 0.1 | 0.05 | AND
S4 | 7 | 0.15 | 0.1 | OR
S5 | 7 | 0.15 | 0.1 | AND
S6 | 20 | 0.1 | 0.05 | OR
S7 | 20 | 0.1 | 0.05 | AND
S8 | 20 | 0.15 | 0.1 | OR
S9 | 20 | 0.15 | 0.1 | AND
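Table 5 lists nine parameter sets for the detector: a kurtosis threshold, two δ* thresholds (one used when the V histogram is unimodal, one when it is multimodal), and a flag stating whether conditions 2 and 3 are joined with OR or with AND. The sketch below only shows one way of encoding these scenarios in code; the kurtosis and δ* criteria themselves are defined in the methods section of the paper, so the two boolean inputs of `combine` are placeholders rather than the authors' implementation.

```python
# A sketch of how the detector scenarios in Table 5 might be encoded and
# combined. The two boolean criteria passed to combine() are placeholders:
# the actual kurtosis and delta* tests are given in the paper's methods
# section, so everything except the scenario table itself is an assumption
# made only to illustrate the OR/AND column.
from dataclasses import dataclass

@dataclass(frozen=True)
class Scenario:
    kurtosis_threshold: float   # column 1 in Table 5
    delta_unimodal: float       # column 2
    delta_multimodal: float     # column 3
    combine_with_or: bool       # True for "OR", False for "AND"

SCENARIOS = {
    "S1": Scenario(6, 0.10, 0.05, True),
    "S2": Scenario(7, 0.10, 0.05, True),
    "S3": Scenario(7, 0.10, 0.05, False),
    "S4": Scenario(7, 0.15, 0.10, True),
    "S5": Scenario(7, 0.15, 0.10, False),
    "S6": Scenario(20, 0.10, 0.05, True),
    "S7": Scenario(20, 0.10, 0.05, False),
    "S8": Scenario(20, 0.15, 0.10, True),
    "S9": Scenario(20, 0.15, 0.10, False),
}

def combine(criterion_2: bool, criterion_3: bool, scenario: Scenario) -> bool:
    """Join the two delta*-based criteria according to the OR/AND column."""
    if scenario.combine_with_or:
        return criterion_2 or criterion_3
    return criterion_2 and criterion_3

# Hypothetical usage: criterion_2/criterion_3 would come from the delta* tests.
print(combine(True, False, SCENARIOS["S3"]))  # AND -> False
```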
Table 6. Detector calibration results. S1–S9 described in Table 5.
Scenario | Odra_6 DR | Odra_6 FHR | Ścinawka DR | Ścinawka FHR | Bystrzyca_1 DR | Bystrzyca_1 FHR | Bystrzyca_2 DR | Bystrzyca_2 FHR | Lake_1 ¹ DR | Lake_1 ¹ FHR | Odra_7 DR | Odra_7 FHR | Bug_1 DR | Bug_1 FHR
S1 | 72.88 | 39.44 | 38.64 | 66.67 | 82.93 | 55.26 | 96.00 | 63.36 | 88.89 | 82.42 | 72.73 | 64.18 | 80.77 | 75.58
S2 | 71.19 | 34.38 | 38.64 | 64.58 | 82.93 | 52.11 | 96.00 | 52.94 | 88.89 | 81.18 | 72.73 | 62.50 | 80.77 | 74.70
S3 | 66.10 | 25.00 | 38.64 | 63.04 | 75.61 | 43.64 | 96.00 | 52.94 | 77.78 | 80.82 | 60.61 | 44.44 | 69.23 | 66.04
S4 | 62.71 | 32.73 | 31.82 | 56.25 | 68.29 | 48.15 | 92.00 | 58.18 | 66.67 | 77.36 | 48.48 | 69.23 | 73.08 | 72.46
S5 | 62.71 | 22.92 | 31.82 | 53.33 | 65.85 | 32.50 | 92.00 | 44.58 | 61.11 | 74.42 | 42.42 | 39.13 | 65.38 | 61.36
S6 | 71.19 | 30.00 | 38.64 | 63.83 | 82.93 | 47.69 | 96.00 | 57.14 | 88.89 | 80.49 | 72.73 | 54.72 | 73.08 | 72.86
S7 | 55.93 | 21.43 | 38.64 | 62.22 | 78.05 | 33.33 | 94.00 | 46.59 | 55.56 | 84.62 | 54.55 | 28.00 | 69.23 | 63.27
S8 | 62.71 | 26.00 | 31.82 | 53.33 | 68.29 | 34.88 | 92.00 | 47.73 | 66.67 | 73.91 | 48.48 | 56.76 | 65.38 | 69.64
S9 | 50.85 | 21.05 | 31.82 | 54.84 | 65.85 | 27.03 | 90.00 | 40.79 | 38.89 | 81.58 | 39.39 | 27.78 | 61.54 | 58.97
¹ One additional image was used for detector calibration; it is not analysed in this paper.
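Table 6 reports DR and FHR per calibration image for each scenario, but the rule used to pick the final scenario is given in the methods section, not here. Purely as an assumed example of how such a choice could be automated, the sketch below ranks scenarios by mean DR minus mean FHR over a subset of Table 6 (scenarios S1 and S5, images Odra_6 and Ścinawka).

```python
# Illustrative only: the selection rule actually used by the authors is not
# stated in this part of the paper. The ranking below (highest mean DR minus
# mean FHR across calibration images) is an assumed criterion, applied to a
# subset of Table 6 stored as (DR, FHR) pairs.
from statistics import mean

CALIBRATION = {
    "S1": {"Odra_6": (72.88, 39.44), "Scinawka": (38.64, 66.67)},
    "S5": {"Odra_6": (62.71, 22.92), "Scinawka": (31.82, 53.33)},
}

def score(results: dict[str, tuple[float, float]]) -> float:
    """Assumed score: mean detection rate minus mean false hit rate."""
    return mean(dr for dr, _ in results.values()) - mean(fhr for _, fhr in results.values())

best = max(CALIBRATION, key=lambda s: score(CALIBRATION[s]))
print(best, round(score(CALIBRATION[best]), 2))  # picks the higher-scoring scenario
```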
Table 7. Waterline detection statistics.
No. | River | Region | Image Name | DR (250 × 250 px) [%] | FHR (250 × 250 px) [%] | DR (125 × 125 px) [%] | FHR (125 × 125 px) [%]
1 | Bug | Binduga | Bug_1 | 65.38 | 61.36 | 55.32 | 81.02
2 | Bug | Binduga | Bug_2 | 65.38 | 26.09 | 51.22 | 54.35
3 | Bug | Binduga | Bug_3 | 44.12 | 6.25 | 41.27 | 31.58
4 | Biała Lądecka | Żelazno | Biała_1 | 40.00 | 82.76 | 54.55 | 81.25
5 | Biała Lądecka | Żelazno | Biała_2 | 39.39 | 62.86 | 31.82 | 76.67
6 | Bystrzyca Dusznicka | Szalejów Dolny | Bystrzyca_1 | 65.85 | 32.50 | 70.93 | 53.44
7 | Bystrzyca Dusznicka | Szalejów Dolny | Bystrzyca_2 | 92.16 | 43.37 | 89.69 | 57.14
8 | Bystrzyca Dusznicka | Szalejów Dolny | Bystrzyca_3 | 72.09 | 40.38 | 61.18 | 49.02
9 | Bystrzyca Dusznicka | Szalejów Dolny | Bystrzyca_4 | 54.84 | 5.56 | 59.38 | 25.49
10 | Bystrzyca Dusznicka | Szalejów Dolny | Bystrzyca_5 | 36.36 | 15.79 | 34.07 | 44.64
11 | Izera | Hala Izerska | Izera | 42.86 | 25.00 | 40.00 | 36.84
12 | Kwisa | Gryfów Śląski | Kwisa | 22.22 | 0.00 | 24.42 | 51.16
13 | Nysa Kłodzka | Byczeń | Nysa_1 | 88.57 | 48.33 | 57.61 | 53.10
14 | Nysa Kłodzka | Ławica | Nysa_2 | 57.45 | 42.55 | 42.42 | 53.33
15 | Nysa Kłodzka | Ławica | Nysa_3 | 51.28 | 28.57 | 42.67 | 65.22
16 | Nysa Kłodzka | Śrem | Nysa_4 | 54.55 | 45.45 | 34.85 | 62.90
17 | Odra | Dąbrowa | Odra_1 | 60.87 | 41.67 | 45.24 | 70.77
18 | Odra | Dąbrowa | Odra_2 | 46.67 | 41.67 | 42.86 | 53.45
19 | Odra | Dąbrowa | Odra_3 | 60.60 | 28.57 | 67.24 | 23.53
20 | Odra | Dąbrowa | Odra_4 | 63.30 | 5.00 | 60.42 | 19.44
21 | Odra | Gogolin | Odra_5 | 47.62 | 65.52 | 50.00 | 71.01
22 | Odra | Gogolin | Odra_6 | 62.71 | 22.92 | 40.68 | 48.39
23 | Odra | Gogolin | Odra_7 | 42.42 | 39.13 | 35.38 | 70.89
24 | Odra | Gogolin | Odra_8 | 78.57 | 15.38 | 59.26 | 42.86
25 | Odra | Kunice | Odra_9 | 61.90 | 38.10 | 65.12 | 41.67
26 | Odra | Pomorsko | Odra_10 | 59.09 | 50.00 | 56.10 | 70.89
27 | Odra | Pomorsko | Odra_11 | 55.56 | 56.52 | 53.12 | 83.00
28 | Odra | Wyszanów | Odra_12 | 66.67 | 60.78 | 57.14 | 66.97
29 | Odra/Barycz | Wyszanów | Odra_13 | 61.90 | 38.10 | 43.18 | 50.65
30 | Ścinawka | Gorzuchów | Ścinawka | 31.82 | 53.33 | 37.35 | 71.03
std | 15.34 | 19.80 | 13.53 | 17.09
range | 69.78 | 82.76 | 65.27 | 63.56
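The last two rows of Table 7 summarise the per-image accuracy measures by their standard deviation and range. The paper defines DR and FHR in its methods section; for illustration only, the sketch below assumes grid-cell counts with DR = TP/(TP + FN) and FHR = FP/(TP + FP), both expressed in percent, and shows how the summary rows can then be obtained (the counts in the example are hypothetical, and the use of the population standard deviation is also an assumption).

```python
# Illustrative only: assumed formulas DR = TP/(TP+FN) and FHR = FP/(TP+FP),
# computed on grid cells, plus the std/range summaries used in Table 7.
from statistics import pstdev

def detection_rate(tp: int, fn: int) -> float:
    """Share of reference waterline cells that were detected, in percent."""
    return 100.0 * tp / (tp + fn)

def false_hit_rate(tp: int, fp: int) -> float:
    """Share of detected cells that do not lie on the reference waterline, in percent."""
    return 100.0 * fp / (tp + fp)

def summary(values: list[float]) -> tuple[float, float]:
    """Standard deviation (population) and range, as in the last two rows of Table 7."""
    return pstdev(values), max(values) - min(values)

# Hypothetical counts for three images: (TP, FN, FP) per image.
counts = [(17, 9, 16), (26, 14, 9), (15, 20, 5)]
dr = [detection_rate(tp, fn) for tp, fn, _ in counts]
fhr = [false_hit_rate(tp, fp) for tp, _, fp in counts]
print([round(v, 2) for v in dr], summary(dr))
```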
Table 8. Strengths and weaknesses of RGB-based coastline and water body detection methods.
Reference | Strength | Weakness
Namikawa et al. (2016) [12] | Simple method without user intervention and with little computational power required | Noisy pixels must be eliminated
Niedzielski et al. (2016) [7] | Highly available (consumer-grade RGB camera mounted on a UAV needed) | Manual digitization needed
Wei and Zhang (2016) [16] | Effective and robust method | Method sensitive to shadows
Kröhnert and Meichsner (2017) [18] | Highly available (smartphone needed) | Method sensitive to user motion and to vegetation moving in the wind
Eltner et al. (2018) [19] | Highly available (low-cost camera needed) | Direct access to the river vicinity needed; GCPs and a total station required
Ridolfi and Manciola (2018) [23] | Rapid and inexpensive procedure | GCPs required
Tymków et al. (2019) [24] | High overall accuracy | Additional sensor required (best results obtained for the RGB+TIR combination)
Eltner et al. (2021) [21] | Highly available (low-cost camera needed) | Direct access to the river vicinity needed; GCPs and a total station required