Article

Insights into Segmentation Methods Applied to Remote Sensing SAR Images for Wet Snow Detection

1 Univ. Grenoble Alpes, Université de Toulouse, Météo-France, CNRS, CNRM, Centre d’Études de la Neige, 38000 Grenoble, France
2 Univ. Grenoble Alpes, CNRS, Inria, Grenoble INP, LJK, 38000 Grenoble, France
3 CNES, 31401 Toulouse, France
* Author to whom correspondence should be addressed.
Current address: Météo-France, DIRCE/SERVICES, 69500 Bron, France.
Geosciences 2023, 13(7), 193; https://doi.org/10.3390/geosciences13070193
Submission received: 15 May 2023 / Revised: 21 June 2023 / Accepted: 23 June 2023 / Published: 27 June 2023
(This article belongs to the Section Cryosphere)

Abstract
Monitoring variations in the extent of wet snow over space and time is essential for many applications, such as hydrology, mountain ecosystems, meteorology and avalanche forecasting. The Synthetic Aperture Radar (SAR) measurements from the Sentinel-1 satellite help detect wet snow in almost all weather conditions. Most detection methods apply a fixed threshold to the ratio of a winter image with one or two reference images (with no snow or dry snow). This study aimed to explore the potential of image segmentation methods from different families, applied to Sentinel-1 SAR images, to improve the detection of wet snow over the French Alps. Several segmentation methods were selected and tested on a large alpine area of 100 × 100 km². The segmentation methods were evaluated over one season using total snow masks from Sentinel-2 optical measurements and outputs from forecasters’ bulletins combining model and in-situ observations. Different metrics were used (such as snow probability, correlations, Hamming distance, and structure similarity scores). The standard scores illustrated that filtering globally improved the segmentation results. Using a probabilistic score as a function of altitude highlights the interest of some segmentation methods, and we show that these scores could be relevant to better calibrate the parameters of these methods.

1. Introduction

Image segmentation is a widely used technique to analyze images and extract relevant information about a phenomenon in many fields, such as biomedicine, spatial remote sensing, and video communication. Two advantages of segmentation techniques are that they make image processing relatively straightforward and that they highlight a significant amount of information contained in only a part of an image. Image segmentation divides an entire image into disjoint and homogeneous regions based on chosen criteria. There are more than a thousand image segmentation methods or algorithms, most of which are suited to a specific application context and some of which are more easily adaptable to many applications. Examples of types of segmentation methods include: (1) thresholding-based segmentation, (2) segmentation based on shape or edge detection, (3) region growth-based methods, (4) energy minimization-based segmentation and (5) dynamical image segmentation. Some of these methods are described in [1,2,3,4,5,6,7,8,9,10,11,12,13,14]. Many of these methods are publicly available as open-source code, such as in the Python toolbox scikit-image [15].
The Sentinel-1 mission, part of the European Space Agency’s Copernicus program, consists of a constellation of two polar-orbiting satellites, making all-weather, day and night, C-band Synthetic Aperture Radar (SAR) observations. Sentinel-1 observes the Earth in four different modes, facilitating the use of SAR data in various applications. A SAR system is a coherent side-looking radar system used to monitor the Earth with high-resolution images thanks to its capacity to electronically simulate a huge antenna. In coherent imaging systems, speckle is a signal-dependent noise arising from the coherent summation of backscattered signals from multiple distributed targets.
More specifically, speckle noise looks like grainy salt and pepper in radar images, and the presence of these patterns reduces the clarity of the images. Despite speckle effects, the SAR measurements of Sentinel-1 have relevant information content, particularly of interest for monitoring cold regions. Studies by [16,17,18,19,20,21,22,23,24,25,26], among many others, exploited these observations to better understand the state and evolution of wet snow, including its spatial variability and how it changes over time.
Methods for detecting wet snow using SAR data usually rely on a change detection approach with an image ratio that uses two images with and without snow, assuming that the backscatter of wet snow is generally smaller than that of dry snow or snow-free areas. The ratio is calculated for the same study area, and a threshold is then applied to derive a snow mask [27]. One of the reference studies on the subject is [16], which used ratios of Sentinel-1 images with a threshold of −2 dB combining VV and VH polarizations. A 2–3 dB threshold is not relevant everywhere and may introduce detection errors in some situations, depending on, for example, the surface type or its roughness (see, for example, [28]). The SAR signal variation can be limited in forest areas, sometimes being less than the 2–3 dB range.
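This change detection approach can be sketched in a few lines. The example below is a minimal illustration on synthetic arrays, not the article's operational chain: backscatter values are assumed to be in dB, so the image "ratio" becomes a difference, and the −2 dB threshold mirrors the one quoted above.

```python
import numpy as np

def wet_snow_mask(winter_db, reference_db, threshold_db=-2.0):
    """Flag a pixel as wet snow when the winter backscatter drops below
    the reference backscatter by more than the threshold (values in dB,
    so the image ratio is computed as a difference)."""
    ratio_db = winter_db - reference_db
    return ratio_db < threshold_db

# Toy 3x3 example: wet snow lowers backscatter by several dB.
reference = np.full((3, 3), -8.0)
winter = np.array([[-8.5, -12.0, -8.0],
                   [-13.0, -14.0, -8.2],
                   [-8.1, -11.5, -8.0]])
mask = wet_snow_mask(winter, reference)  # 4 pixels flagged as wet snow
```

In practice, the winter and reference images must of course be co-registered over the same study area before the difference is taken.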
The main objective of this study was to explore various families of image segmentation methods for snow monitoring in mountain areas. The segmentation techniques were carefully selected, based on different criteria, and tested on the Sentinel-1 SAR image time series over a season characterised by wet snow. The aim was to find ways to improve wet snow detection methods through simple techniques that are easy to implement and require low computational costs. The tested methods apply to other topics, making them versatile and practical in various applications. Additionally, we aimed to evaluate the segmentation results by selecting independent optical images as targets and computing several evaluation scores. This provides a comprehensive assessment of the effectiveness of the segmentation methods.

2. Data and Method

2.1. Location and Time Period

The area under investigation was part of the French Alps, as shown in Figure 1. This figure also shows the elevation and massif limits of the study area. The test zone was the 31TGL tile (red frame in Figure 1), which fully covered seven massifs of the French Northern Alps and partially covered five other massifs. Our area was quite a heterogeneous alpine zone with complex topography and terrain and several land cover types, including forests, bare ground, glaciers and wetlands. The study period covered the 2017–2018 winter season, which was characterized by a very large snow surplus due to frequent passages of disturbances [29,30]. Very cold periods (26 and 27 February 2018) followed milder periods, which favoured the presence of wet snow and episodes of rain on snow. Finally, the spring of 2018 saw sustained melting from April onwards, with a few scattered colder episodes.

2.2. Satellite Data

Satellite observations used in this study were the C-band backscatter coefficients of the Sentinel-1 missions launched by the European Space Agency (ESA) in the framework of the Copernicus program. The Sentinel-1 mission comprises two satellites, Sentinel-1A and Sentinel-1B, which provide continuous observations over France every six days (every twelve days since the end of 2021, with the loss of Sentinel-1B). Sentinel-1A and Sentinel-1B were launched in 2014 and 2016, respectively. Sentinel-1 SAR images, due to the side-looking imaging geometry of the instruments, are subject to geometry-induced radiometric distortions (shadow, layover and foreshortening effects); the affected areas were excluded from our study. Digital elevation models and sensor acquisition parameters are necessary to simulate layover, foreshortening, and shadow regions. More information can be found in the studies of [31,32].
We used Level-1 Ground Range Detected (GRD) products from the PEPS platform (peps.cnes.fr accessed on 1 January 2021). Sentinel-1 data are also available from the ESA web site (https://scihub.copernicus.eu/dhus/ accessed on 1 January 2021). Sentinel-1 data have a spatial resolution of 20 m in both VV and VH polarizations. We used Sentinel-1 relative orbit A161 (ascending, late afternoon) data relevant to our test site. We used the Centre National d’Etudes Spatiales (CNES) computing facilities to preprocess the Sentinel-1 data. The preprocessing included thermal noise removal, speckle filtering using the multi-temporal filtering of [33] (optional), radiometric calibration, and terrain correction using the 30 m SRTM digital elevation model. As a result of this phase, we obtained a stack of images on MGRS tiles of approximately 100 × 100 km2.
Figure 2 shows a flowchart of this preprocessing. The Sentinel-1 images used in this study were from the relative orbit A161 (ascending orbit) and covered the period from mid-December to the end of June 2018 (an image every six days). We also used the Theia snow products derived from Sentinel-2 data when available over our test zone (in cloud-free situations) [34]. The Sentinel-2 mission, developed by the European Space Agency (ESA), is composed of two satellites (Sentinel-2A and Sentinel-2B) operating in the same orbit (786 km) and launched in 2015 and 2017, respectively. The CNES and CESBIO develop these snow retrieval products, which are freely available from the Theia website (https://theia.cnes.fr accessed on 1 January 2021). The Sentinel-2 snow retrieval products used in this study were those of January (05, 15, 25, 30), February (04, 09, 14, 19, 24), March (21, 26, 31), April (05, 15, 20, 25), May (05, 15, 25) and June (09, 14, 19) 2018. The Sentinel-2 total snow products were used as target references to evaluate Sentinel-1-derived wet snow maps.

2.3. BERA Bulletins

To further evaluate the overall weather conditions over our study period, we used the information from the BERA bulletins (Bulletin d’Estimation du Risque d’Avalanche), provided daily in winter by the snow forecasters of Météo-France. These bulletins combine data from model outputs, observations of different types and human expertise to provide information about the state of the snowpack for the French massifs (23 massifs in the Alps, 21 in the Pyrenees and two massifs in Corsica). Figure 3, Figure 4 and Figure 5 provide the available information for the seven massifs included in the 31TGL tile (the massifs of Belledonne, Chartreuse, Bauges, Maurienne, Vanoise, Grandes-Rousses and Beaufortin) with the continuous snow limit for the northern and southern orientations of each massif (Figure 3 and Figure 4) and the rain–snow limit in cases of precipitation for the Grandes-Rousses massif (Figure 5). The rain–snow line curve indicated frequent rainy weather and rain-on-snow events over the study period. These data helped to infer wet snow occurrences, and Sentinel-1’s wet snow products could be synthesized into altitude–time–orientation diagrams at the scale of each massif, following the method presented by [26]. The resulting graphs could then be conveniently compared to the BERA bulletins.

2.4. Methods

2.4.1. Selected Segmentation Methods

There exists a vast number of methods for image segmentation. The challenge was to select segmentation strategies effective for our problem while maintaining diversity among the studied methods. In addition, we also wanted to explore approaches initially developed for different application contexts that might be of interest for satellite remote sensing. Figure 6 displays the various segmentation methods tested in this study. Each method’s unique acronym is used throughout the article.
(a)
Chan-Vese method
The Chan–Vese method (called CV hereafter), detailed in [35], belongs to the family of contour evolution segmentation methods. It provides a contour C separating the image into two sub-areas which are as homogeneous as possible. This is achieved by minimization of the cost function
$$F = \mu \cdot \mathrm{Length}(C) + \nu \cdot \mathrm{Area}(\mathrm{inside}(C)) + \lambda_1 \int_{\mathrm{inside}(C)} |u_0(x,y) - c_1|^2 \,dx\,dy + \lambda_2 \int_{\mathrm{outside}(C)} |u_0(x,y) - c_2|^2 \,dx\,dy,$$
where C is the contour to be determined and $u_0(x,y)$ is the value of the pixel with coordinates x and y. The constants $c_1$ and $c_2$, which depend upon C, are the mean values inside and outside C, respectively. The parameters $\lambda_1, \lambda_2, \mu > 0$ and $\nu \in \mathbb{R}$ are fixed by the user.
The CV algorithm belongs to the class of level-set methods, i.e., the contour C corresponds to the level curve of a function  ϕ  vanishing on C (and having different signs at both sides). The cost function F is expressed as a function of  ϕ , regularized, and minimized iteratively using a gradient descent. As iterations go on, the contour becomes closer to the structure to be segmented.
The CV method has several advantages. The level-set approach is appropriate to handle possible topological changes of the contour while the latter evolves towards the final segmenting contour. It is based on the minimization of an energy defined by F, which can be supplemented by ad hoc terms depending on the application, and possesses several tunable parameters. The parameters $\lambda_1$ and $\lambda_2$ weight the variance of each sub-area. The parameter $\mu$ has an impact on the length of the contour: the smaller $\mu$ is, the longer and more tortuous the interface can be, which allows one to better track the details (at the risk of increasing noise), while a higher value of $\mu$ tends to smooth the contour. The $\nu$ parameter can be used to increase or decrease the area inside the contour.
In this paper, we used the version of the CV algorithm available in the Scikit-Image python library. In this version, the weighting term for the area inside the contour is not taken into account (i.e., $\nu = 0$), and we assigned the values $\mu = 0.3$ and $\lambda_1 = \lambda_2 = 1$ to the other coefficients. The iterations were stopped either when a maximal number of iterations was reached (fixed at 200 in this study) or when the root mean square variation between successive iterates of $\phi$ fell below a specified tolerance ($5 \times 10^{-4}$ in this work).
(b)
Random Walker method
This method (described in [36]) is part of the family of methods based on graph partitioning. The version that we used comes from the python library scikit-image. This algorithm can segment an image into several categories (in our case, presence or absence of wet snow), but it requires a priori knowledge of elements of each category (called the “seeds”) before starting the segmentation. Once the seeds have been chosen, the algorithm calculates, for each undetermined pixel, the probability that a “walker” starting from this pixel, and moving randomly from one pixel to another, would first reach one of the seeds. This type of random displacement is similar to an anisotropic diffusion mechanism: the diffusivity coefficient is larger for neighboring pixels having similar values, and scattering is more difficult across areas of strong gradients. Computing the above probabilities amounts to solving a sparse linear system of equations corresponding to a discrete Dirichlet problem (several numerical methods are implemented within the scikit-image toolbox; we chose the conjugate gradient method with Jacobi preconditioner and a tolerance fixed to $10^{-5}$). A segmented image was obtained by assigning each pixel to the most probable seed. In the following, we denote the Random Walker algorithm by RW.
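A compact illustration with scikit-image's `random_walker` on a synthetic image (the seed positions below are arbitrary illustrative choices; `mode='cg_j'` selects the conjugate gradient solver with Jacobi preconditioner mentioned above):

```python
import numpy as np
from skimage.segmentation import random_walker

rng = np.random.default_rng(1)
image = np.zeros((64, 64))
image[16:48, 16:48] = 1.0
image += 0.15 * rng.standard_normal(image.shape)

# Seed labels: 0 = undetermined, 1 = "wet snow", 2 = "no wet snow".
labels = np.zeros(image.shape, dtype=np.uint8)
labels[32, 32] = 1   # seed inside the patch
labels[4, 4] = 2     # seed in the background

# Conjugate gradient with Jacobi preconditioner, tolerance 1e-5.
result = random_walker(image, labels, beta=130, mode='cg_j', tol=1e-5)
# Each pixel is assigned the label of its most probable seed.
```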
(c)
Random Forest method
It is possible to train an algorithm on a set of segmented images so that it can then recognize wet snow areas in a new image. We used the supervised decision-tree ensemble learning algorithm Random Forest, implemented in the python library scikit-learn and based on the work of [37]. We chose this algorithm because it is one of the most popular in machine learning and requires little preprocessing. However, many other methods could have been tested, such as neural networks. This algorithm uses randomly constructed decision trees trained on different subsets of the data. Each tree taken individually has a weak classification capacity (it is called a weak classifier), close to a random classification. It is the union of these decision trees in a “forest” (average of the trees) that has a strong classification capacity. The choice of the number of classifiers is, therefore, an important element and is strongly dependent on the problem. In our case, the number of trees was fixed to 20 (100 by default) and the depth of the trees was limited to 10.
The efficiency of this algorithm mainly depends on the training image sets. In our case, we selected 100 × 100 pixel images from our full image time series representing either wet snow or snow-free areas (see Figure 7). We also tested adding a snow edge image to increase the number of pixels categorized as wet snow. In this article, RF1 denotes the version trained without snow edge images and RF2 the version trained with them (see Figure 6 for a simplified scheme of all methods tested in this study).
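The classifier setup quoted above (20 trees, depth limited to 10) can be sketched with scikit-learn. Note that the feature vectors below are purely illustrative two-band backscatter values, not the image patches actually used for training in this study:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)

# Hypothetical per-pixel features: [VV backscatter (dB), VH backscatter (dB)].
# Wet snow pixels are simulated with lower backscatter, following the
# change detection rationale.
wet = rng.normal(loc=[-14.0, -20.0], scale=1.0, size=(500, 2))
dry = rng.normal(loc=[-8.0, -14.0], scale=1.0, size=(500, 2))
X = np.vstack([wet, dry])
y = np.array([1] * 500 + [0] * 500)  # 1 = wet snow, 0 = no wet snow

# 20 trees of depth at most 10, as quoted in the text.
clf = RandomForestClassifier(n_estimators=20, max_depth=10, random_state=0)
clf.fit(X, y)

# Classify two new pixels: one wet-snow-like, one snow-free-like.
pred = clf.predict([[-13.5, -19.0], [-7.5, -13.0]])
```

Each tree alone is a weak classifier; the ensemble vote over the 20 trees provides the strong classification capacity described above.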
(d)
Fixed thresholding
Many studies have used a threshold of 2–3 dB to separate dry snow or snow-free areas from wet snow. The algorithm employed by [19] used a fixed threshold to map wet snow using a reference image taken under dry snow or frozen ground conditions. Our panel of segmentation methods included the 2 dB threshold, which is commonly used for snow monitoring. However, it is important to note that the use of a fixed threshold, as reported by [28], is not universally applicable and may lead to detection errors, depending on factors such as snow surface roughness and liquid water content. Since most segmentation methods include, by definition, an image filtering step, we evaluated the fixed thresholding method both with and without filtering the images with a simple Gaussian filter. Applying this filter consists of computing, for each pixel of the image, the average of the neighbouring pixels, weighted by their distance to the central pixel: the further away a pixel is, the smaller the impact of its value on the central value. By construction of the Gaussian filter, 95% of the value assigned to the central pixel comes from pixels located within a distance of $2\sigma$, $\sigma$ being a distance in pixels that we provided as a parameter to the Gaussian filter. Figure 8 shows the effect of Gaussian filtering on the fixed thresholding segmentation result for different $\sigma$ values.
By comparison with the optical image of 20 April 2018, we chose the $\sigma = 5$ option. The segmentation results with Gaussian filtering are denoted 2dBF and those without filtering 2dB. Figure 6 shows a simplified scheme of all methods tested in this study.
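The effect of pre-filtering on the fixed threshold can be reproduced on a synthetic ratio image (a sketch under the same assumptions as before: ratio values in dB, a −2 dB threshold, and speckle-like Gaussian noise standing in for real SAR noise):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(3)

# Noisy ratio image (dB): a wet snow patch with a -4 dB drop plus
# speckle-like noise on a zero-mean background.
ratio_db = rng.normal(0.0, 1.5, size=(128, 128))
ratio_db[32:96, 32:96] -= 4.0

def threshold_mask(img, sigma=None, threshold=-2.0):
    """Fixed -2 dB thresholding, optionally preceded by Gaussian filtering."""
    if sigma is not None:
        img = gaussian_filter(img, sigma=sigma)
    return img < threshold

mask_raw = threshold_mask(ratio_db)            # "2dB": noisy mask
mask_filt = threshold_mask(ratio_db, sigma=5)  # "2dBF": sigma = 5 as chosen above
```

The filtered mask suppresses the isolated false detections scattered over the background while keeping the wet snow patch intact, at the cost of smoother contours.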
(e)
Otsu’s adaptive thresholding
Otsu’s method [13] is basically the same as the fixed threshold method, except that the threshold is defined for each image from the histogram of the image value distribution. This algorithm is implemented in the python library scikit-image (https://github.com/scikit-image/scikit-image accessed on 1 January 2021). The selected threshold is the one that minimizes the intraclass variance defined by:
$$\sigma_w^2 = \sigma_0^2 \sum_{i=0}^{t-1} p(i) + \sigma_1^2 \sum_{i=t}^{L-1} p(i),$$
where t is the threshold, $\sigma_0^2$ is the variance of the values lower than t, $\sigma_1^2$ is the variance of the values greater than or equal to t, $p(i)$ is the probability of occurrence of value i (histogram class) and L is the number of histogram classes. As for the fixed thresholding, Otsu’s method was applied both with and without Gaussian filtering ($\sigma = 5$). Hereafter, Otsu’s method with Gaussian filtering is denoted by OtsuF.
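A short illustration of the per-image adaptive threshold with scikit-image's `threshold_otsu`, again on a synthetic bimodal "ratio" image rather than real data:

```python
import numpy as np
from skimage.filters import threshold_otsu

rng = np.random.default_rng(4)

# Bimodal image: background around 0 dB, wet snow patch around -4 dB.
image = rng.normal(0.0, 0.8, size=(128, 128))
image[32:96, 32:96] = rng.normal(-4.0, 0.8, size=(64, 64))

# Otsu picks, for each image, the threshold minimizing the intraclass
# variance of the histogram; no fixed dB value is imposed.
t = threshold_otsu(image)
mask = image < t
```

Unlike the fixed 2 dB method, the selected threshold here adapts to each image's histogram, which is the point of the comparison carried out in this study.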
(f)
RGB color adaptive thresholding
We also tested a methodology recently developed to detect avalanche debris and wet snow using a color space segmentation technique [38,39]. The method involves the use of SAR images acquired at different winter dates and the careful selection of 1 or 2 reference images. The technique relies on the use of the Hue Saturation Value (HSV) color space to store SAR images from different dates where the H(ue) dimension represents the color, the S (saturation) dimension represents the dominance of this color, and the V (value) dimension represents the brightness. The HSV color space is particularly relevant for identifying contrasts in images. Therefore, the color detection algorithm seeks the position of the color and its luminance. The detection of wet snow is equivalent to detecting shades of a specific color (which represent areas where there is a drop in values of backscatter). There were two main steps we followed. Firstly, an RGB composition was made with 2 or 3 images (R: Reference-image1, G: image day D, B: reference-image2). RGB is for Red, Green, Blue, and colors are represented in their proportions of red, green, and blue. The resulting RGB image was then converted into HSV space. With this configuration, detecting wet snow implied identifying shades of magenta (which indicated areas where there was a drop in backscatter values compared with reference images). We fixed a magenta color variation range to detect wet snow optimally. An example of the detection results is shown in Figure 9, where wet snow extent is shown for 27 April 2018 over the Belledonne massif. The method is noted as “Magenta” (M) hereafter. A version of the method with Gaussian filtering was also tested and is noted as “Magenta Filtered” (MF) hereafter.
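The two steps described above (RGB composition, then HSV conversion and magenta detection) can be sketched with scikit-image. The hue/saturation window below is illustrative; the exact magenta variation range fixed in the study is not reproduced here:

```python
import numpy as np
from skimage.color import rgb2hsv

# Step 1: RGB composite with R = reference image 1, G = winter day D,
# B = reference image 2 (all backscatter maps scaled to [0, 1]).
ref1 = np.full((32, 32), 0.8)
ref2 = np.full((32, 32), 0.8)
day = np.full((32, 32), 0.8)
day[8:24, 8:24] = 0.2          # backscatter drop on day D -> magenta shade

rgb = np.dstack([ref1, day, ref2])

# Step 2: convert to HSV and detect shades of magenta (hue near
# 300/360 = 5/6) with non-negligible saturation.
hsv = rgb2hsv(rgb)
hue, sat = hsv[..., 0], hsv[..., 1]
mask = (np.abs(hue - 5 / 6) < 0.05) & (sat > 0.3)
```

Pixels where the day-D backscatter dropped relative to both references appear magenta (high R and B, low G) and are captured by the hue window, while unchanged pixels remain grey (zero saturation) and are rejected.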

2.4.2. Scores

To evaluate wet snow outputs from the segmentation methods, we compared them to total snow masks from optical measurements using several scores. These scores could measure disagreements between binary images, consider the distribution of pixels, and consider the presence of noise.
The list of the used scores is as follows:
  • Correlation: This measures any possible statistical link between our two variables (total snow and wet snow). Correlations were computed using the following formula:
    $$r = \frac{\sum_{i=1}^{n}(S_i - \bar{S})(O_i - \bar{O})}{\sqrt{\sum_{i=1}^{n}(S_i - \bar{S})^2}\,\sqrt{\sum_{i=1}^{n}(O_i - \bar{O})^2}},$$
    where S is the binary wet snow product (output of segmentation), O is the binary total snow product from Sentinel-2, and n is the number of pixels of S and O.
  • Contingency table: This was used to count the different possible combinations of pixels of the optical snow image and those of the “predicted” one (output of the SAR segmentation). Hereafter, a refers to the number of cases “observed” by the optical data and predicted by the SAR segmentation method, b to the cases not observed but predicted, c to the cases observed but not predicted, d to the cases neither observed nor predicted, and n is the total number of pixels. These quantities are summarized in Table 1 and were used to obtain the variety of other scores listed below.
    a-
    Hamming Distance: This is a mathematical distance that corresponds to the proportion of pixels out of agreement between two binary images. It varies between 0 and 1, 0 being a perfect match between the binary images.
    The Hamming distance is calculated as follows:
    $$h = \frac{b + c}{n}$$
    b-
    False Alarm Rate (FAR): This score measures the proportion of false snow detection cases with respect to the number of snow pixels detected. It is defined by:
    $$FAR = \frac{b}{a + b}$$
    c-
    True Detection Rate (Hit Rate): This score measures the proportion of snow detection cases that are true with respect to the number of snow pixels observed. It is defined by:
    $$HR = \frac{a}{a + c}$$
    d-
    Heidke Skill Score (HSS): This score measures the benefit of SAR image segmentation compared to a random snow pixel distribution. It varies between $-\infty$ and 1, with a negative value corresponding to a degraded detection compared to a random distribution, a null value corresponding to no benefit from the segmentation of the SAR images and a value of 1 being equivalent to a perfect correspondence between the two images.
    HSS is defined as follows:
    $$HSS = \frac{2(ad - bc)}{(a + c)(c + d) + (a + b)(b + d)}$$
  • Structural Similarity Index (SSI): This score takes into account the variations, and the differences of structure in two images. It was built by [40], who pointed out that the Mean Square Error, commonly used to differentiate two images, does not take into account the similarities of structure between images. SSI takes the form
    $$SSI = l(S, O) \cdot c(S, O) \cdot s(S, O),$$
    where l is based on mean images, c on standard deviation of images and s on correlations between images.
  • Probabilistic Score for Satellite Products (PSSP): The authors of [41] introduced this new score, which is particularly useful for assessing snowpack characteristics in mountainous areas, where factors such as altitude, slopes, and terrain orientation are critical. The PSSP score facilitates the integration of satellite products with varying spatial resolutions and types. By calculating snow probability curves from Sentinel-1 and Sentinel-2 images at observation dates close to each other, the PSSP score provides a comprehensive view of the snowpack’s essential features at the mountain massif scale. RMS errors or correlations are computed to measure the distance between the obtained probability snow product curves.
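The contingency-table scores above can be computed directly from two binary masks. The sketch below follows the definitions of a, b, c, d and n given earlier (the small example arrays are illustrative, not taken from the study's data):

```python
import numpy as np

def contingency_scores(pred, obs):
    """Contingency-table scores between a segmented wet snow mask (pred)
    and an observed snow mask (obs), both boolean arrays.
    a: observed and predicted, b: predicted only, c: observed only,
    d: neither, n: total number of pixels."""
    a = np.sum(pred & obs)
    b = np.sum(pred & ~obs)
    c = np.sum(~pred & obs)
    d = np.sum(~pred & ~obs)
    n = pred.size
    hamming = (b + c) / n                       # proportion of disagreeing pixels
    far = b / (a + b)                           # False Alarm Rate
    hr = a / (a + c)                            # Hit Rate (true detection rate)
    hss = 2 * (a * d - b * c) / ((a + c) * (c + d) + (a + b) * (b + d))
    return {"hamming": hamming, "FAR": far, "HR": hr, "HSS": hss}

obs = np.array([[1, 1, 0, 0],
                [1, 1, 0, 0]], dtype=bool)
pred = np.array([[1, 0, 0, 0],
                 [1, 1, 1, 0]], dtype=bool)
scores = contingency_scores(pred, obs)
# Here a=3, b=1, c=1, d=3, n=8, so hamming=0.25, FAR=0.25, HR=0.75, HSS=0.5.
```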
In addition to the scores already presented, it was necessary to measure the execution time of each segmentation algorithm to thoroughly compare their performances. A method that provides excellent results but takes too long to run may not be practical for processing large amounts of data.
The runtime depends on the configuration of the machine used. For comparison purposes, our machine had a 20-core 2.30 GHz CPU and 96 GB of RAM.
With the selected scores, we aimed to distinguish between segmentation methods using correlation values, Hamming distance, HR rates, FAR rates, HSS values, and probability curves. The ideal approach should have high correlation values, low Hamming distance, high HR rates, low FAR rates, high HSS values, and the best-fitting probability curves with minimal runtime.
Figure 10 shows a flowchart of all data processing. As illustrated in this figure, the reference image was 25 August 2017. This article does not address the impact of reference image selection on wet snow detection. The study in [42] showed that examining the cross-correlations between images calculated over an entire time series allows for a better selection of reference images (with less similarity to the winter period). This approach was tested in [38] to better detect avalanche debris. A further study, regarding reference image impact on wet snow, will be the subject of dedicated research.

3. Results

3.1. Focus on a Situation

Let us first look at the results for 22 April 2018, which corresponded to a generalized melting period. On that day, only 5 in-situ measurement stations, belonging to the Météo-France station network and located above 2800 m, measured a temperature below 5 °C at mid-day. We, therefore, considered that all the areas located below 3000 m (which represented 98.2% of our study area) were associated with wet snow. We used the binary total snow product from Sentinel-2, as of 20 April 2018, to compare the segmentation results. To better evaluate the results of the different segmentation methods, let us zoom in on the Belledonne Massif area, although the segmentation was performed on the entire 31TGL tile. Figure 11 shows the segmentation results of the CV, RW, 2 dB, 2 dBF, M, RF1 and RF2 methods (see Figure 6 and the Section 2 for a detailed description of segmentation methods) overlaid on the Sentinel-2 total snow extent. On these maps, the grey colour is for situations where no snow detection was made by either Sentinel-1 or Sentinel-2, the blue colour is for cases where snow and wet snow were detected, the red colour is for situations where snow was detected by Sentinel-2 but not detected by Sentinel-1, and, finally, the yellow colour is for cases where wet snow was detected by Sentinel-1 while Sentinel-2 indicated snow-free pixels.
The snow zone had many apparent false alarm zones (in yellow). These areas appeared on shaded northern slopes where there was snow, which did not appear on the optical snow mask. We double-checked this by looking at high-resolution optical images, to ensure it was not a mistake in the segmentation methods. We also noted that methods without prior filtering (M, 2 dB), or low image filtering (CV) prior to segmentation, produced noisier wet snow binary masks than the other methods. The degree of filtering of the Chan–Vese method was dependent on the value of the parameter  μ . However, these methods did a better job of preserving the contours of snow areas with maximum detail. All the methods tended to under-detect snow edges. The Random Forest and Random Walker methods were the ones that were the closest to the observed mask of Sentinel-2, but at the cost of significant smoothing of the contours and loss of details within the snow-covered area.
In Table 2, scores for the whole 31TGL tile are given. One can notice that the methods which filtered the most (RW) obtained the best structure similarity scores, which was highly correlated to the amount of noise remaining on the segmented image (see Section 2.4.2). The methods combining a reasonable structure similarity rate and good detection obtained the best correlation scores, Hamming distance and Heidke Skill Scores.

3.2. Further Evaluations

Regarding the PSSP score, Figure 12 shows the snow occurrence probability curves obtained for three successive Sentinel-2 observation dates (pink curves), with no clouds recorded over the Grandes-Rousses massif: 15 April 2018, 20 April 2018, and 25 April 2018. The figure also shows snow probability curves obtained from all studied segmentation methods using the closest Sentinel-1 images (16, 22 and 28 April 2018). The Sentinel-2 curves indicated probabilities of more than 80% of snow above 2000 m. Depending on observation dates and altitude ranges, the probability curves from the segmentation methods showed variable agreement with the Sentinel-2 curves, which were considered a reference. For the date of 15 April, we noted a general agreement between most curves for altitudes below 1500 m, for which no significant amount of snow was observed, except for the Magenta method (probably sensitive to agricultural wetlands) and the 2 dB method, which overestimated the occurrence of snow. Between 1500 and 2000 m and above 2000 m, the curves of the different segmentation methods could be well differentiated, with a tendency for the unfiltered methods to underestimate snow compared to Sentinel-2. Wet snow was underestimated with the Otsu-based methods (Otsu, OtsuF).
While the situation was similar for 20 April, comparisons with 25 April indicated an overestimation of snow by the filtered methods (except for OtsuF, which did the opposite). In the latter case, however, the increased time lag (three days) between the SAR and optical images may have influenced the results. We evaluated the degree of similarity between the different probability curves by computing the Root Mean Square (RMS) values of the differences between the probability curves (Sentinel-1 minus Sentinel-2). The results of these calculations are presented in Figure 13 and Figure 14. Different altitude ranges were chosen as, for the Grandes-Rousses massif, nearly 70% of the pixels had an altitude below 2000 m, 20% had an altitude between 2000 and 2500 m, 9% had an altitude between 2500 and 3000 m and only 1% had an altitude above 3000 m. For the RMS values and altitudes below 2000 m, errors were smaller for the CV and MF methods, followed by the RF2 method. The errors seemed to be higher with the Otsu method, but the RMS values for all methods changed depending on the specific dates being considered. Considering all altitudes, the method showing the highest similarity to Sentinel-2 was CV, followed by 2 dBF, MF and RF2.
We now consider the other scores, calculated over the entire season, to evaluate the sensitivity of the methods to variations in weather and snow conditions. Figure 15 shows the time evolution of the correlation, Hamming distance and Heidke Skill Score for the different segmentation methods. Figure 15a shows that the correlation rates for all the methods varied significantly over the season. The correlation rate was best during periods of generalized melting due to the abundance of wet snow in the domain. Let us recall that the optical snow mask did not discriminate between wet and dry snow, making comparisons over the winter season more difficult. The segmented surface was generally smaller than the surface of the observed snow mask due to dry snow. We had a few cases of negative correlation in winter, which corresponded to situations where areas with wet snow mostly matched areas without snow in the optical mask. One possible explanation was the gap in dates between the optical and SAR images. During the melt period, the methods with low noise filtering had lower correlation rates. The Random Forest method with edge zones taken into account gave good results on 22 April 2018, but presented more fluctuating correlation rates due to its sensitivity to irrigated agricultural areas. Contrary to expectations, the Otsu method with filtering, which used an adaptive threshold, ultimately never achieved better scores than fixed thresholding with filtering. Given these scores, finding a suitable threshold for the whole image was not a determining factor.
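For reference, the per-image scores tracked in Figure 15 can be computed from two binary masks as below. These are standard definitions with variable names of our choosing; the paper's exact conventions may differ slightly:

```python
import numpy as np

def binary_scores(pred, ref):
    """Correlation, Hamming distance and Heidke Skill Score between a
    segmented wet snow mask and a reference snow mask (standard
    contingency-based definitions)."""
    p = pred.ravel().astype(float)
    r = ref.ravel().astype(float)
    n = p.size
    a = np.sum((p == 1) & (r == 1))   # hits
    b = np.sum((p == 1) & (r == 0))   # false alarms
    c = np.sum((p == 0) & (r == 1))   # misses
    d = np.sum((p == 0) & (r == 0))   # correct negatives
    corr = np.corrcoef(p, r)[0, 1]
    hamming = (b + c) / n             # fraction of disagreeing pixels
    hss = 2.0 * (a * d - b * c) / ((a + c) * (c + d) + (a + b) * (b + d))
    return corr, hamming, hss

pred = np.array([1, 1, 0, 0, 1, 0])   # toy segmented wet snow mask
ref  = np.array([1, 0, 0, 0, 1, 1])   # toy optical snow mask
corr, hamming, hss = binary_scores(pred, ref)
```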
Figure 16 shows the distribution of wet snow as a function of altitude for the Belledonne massif, southern orientations, for the segmentation methods 2 dBF, RF1, MF and RW, following the method introduced in [26]. Figure 17 is similar to Figure 16 but for the northern orientations. Melt lines generated in this way at the scale of the Belledonne massif could be compared with the information from the BERA of this massif. Over the melt period, the continuous snow line aligned with the altitude of the maximum snow gradient for the four methods. The presence of some noise for the MF segmentation method did not interfere with the estimation of the altitude of the wet snow zone. For the methods that best filtered the noise, the wet snow gradient was much sharper, and there were few false detections at low elevations, even in summer. We also noted that the warmer rainy episodes in January were well associated with wet snow. The diagrams calculated from the different methods were very comparable, despite some differences (presence or absence of snow at low altitudes, and over-detection of snow at the end of the season for some methods). In some classes, the number of pixels associated with wet snow was slightly higher for the MF and RW methods.
These diagrams highlight the case of 27 February, which was characterized by very low temperatures over the entire area, with no melting observed during the day and no rain over recent days. All methods detected wet snow at low altitudes. These were cases of frozen ground leading to a wet snow detection error [43].
Looking at cumulative scores over the melt period (see Table 3), the RF1 method performed better than the other tested methods; in particular, it had a much lower false alarm rate. Methods with no or weak filtering had the highest false alarm rates. On the correlation criterion, several methods performed well, with correlations above 0.7 (2 dBF, MF, RW, RF1, RF2, OtsuF).
The Chan–Vese method, as tested in this study, had two limitations. First, in cases of low snow coverage, the chosen setting tended to over-segment the scene, resulting in excess detection at the end of the season; this could be corrected by adjusting the parameters ν and λi. Second, the segmented classes could be inverted, since the method separates two classes without labeling them, which could lead to confusion between snow-covered and snow-free areas on specific dates. These problems are visible in Figure 15, which shows a degradation of the CV method's performance (in terms of correlation, Hamming distance and Heidke Skill Score) at the end of May 2018. A class inversion occurred with the CV method on 16 January, leading to a relative peak in the Hamming distance (and a decrease in correlation and Heidke Skill Score), unlike the other segmentation methods.
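The class-inversion issue can be handled by explicitly labeling the two classes after segmentation. The sketch below uses scikit-image's `chan_vese` (whose `mu`/`lambda1`/`lambda2` parameters play the role of the ν and λi weights mentioned above) and assigns the class with the lower mean backscatter ratio to wet snow; the synthetic scene and parameter values are illustrative only, not the paper's configuration:

```python
import numpy as np
from skimage.segmentation import chan_vese

def segment_wet_snow_cv(ratio_db, mu=0.25, lam1=1.0, lam2=1.0):
    """Two-phase Chan-Vese segmentation of a SAR ratio image (dB),
    followed by explicit class labelling to avoid class inversion.
    Wet snow lowers the backscatter, so the class with the lower mean
    ratio is labelled as wet snow."""
    seg = chan_vese(ratio_db.astype(float), mu=mu, lambda1=lam1, lambda2=lam2)
    # chan_vese returns an unlabelled binary partition: decide which
    # class is wet snow from the mean backscatter ratio of each class
    if ratio_db[seg].mean() < ratio_db[~seg].mean():
        return seg
    return ~seg

# Synthetic scene: a -6 dB (wet snow) square in a noisy 0 dB background
rng = np.random.default_rng(0)
img = rng.normal(0.0, 0.5, (64, 64))
img[20:44, 20:44] -= 6.0
wet = segment_wet_snow_cv(img)
```

The same post-labelling step applies to any unsupervised two-class method, which removes the inversion cases observed on 16 January.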
The methods that reduced noise effectively consistently achieved the best scores. Focusing on the top three methods, their scores were very close, although the methods themselves were very different. The level of smoothing (much higher for the Random Walker method) had little influence on the correlation rate, as long as the noise was reduced. Methods such as RF2, MF and 2 dBF were of particular interest because they preserved the contours of the wet snow areas as much as possible. Regarding execution time, the RW and CV methods were costly, but they offer additional degrees of freedom and avenues for improvement. It is worth noting that no numerical optimization was undertaken during this study; further optimization work might lead to different conclusions.
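As an example of the filtering step that distinguishes, e.g., 2 dB from 2 dBF, a Gaussian filter can be applied to the dB ratio image before fixed thresholding. This is a minimal sketch with an illustrative sigma and a toy scene, not the paper's actual processing chain:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def wet_snow_fixed_threshold(winter_db, reference_db, threshold_db=-2.0, sigma=2.0):
    """Fixed-threshold wet snow detection with prior Gaussian filtering
    (sketch of a '2 dBF'-style chain)."""
    ratio_db = winter_db - reference_db        # ratio in dB = difference of log values
    ratio_smooth = gaussian_filter(ratio_db, sigma=sigma)
    return ratio_smooth < threshold_db         # drop beyond 2 dB -> wet snow

# Toy scene: a 5 dB backscatter drop over a square area
winter = np.zeros((32, 32))
winter[8:24, 8:24] = -5.0
reference = np.zeros((32, 32))
mask = wet_snow_fixed_threshold(winter, reference, sigma=1.0)
```

Filtering before thresholding trades some contour fidelity for a large reduction in speckle-induced false alarms, which is consistent with the scores discussed above.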
We selected a range of segmentation techniques that complemented each other and were straightforward to implement. We did not aim to provide an all-encompassing analysis and comparison of every segmentation method available. The proposed comparative study can still be helpful for users in selecting the most suitable option for their specific applications.

4. Conclusions

Wet snow cover monitoring is one of the use cases of Sentinel-1 satellite images. Thanks to their high spatial resolution and short revisit time, they are well suited to tracking the dynamics of wet snow. SAR measurements can detect wet snow in almost all weather conditions, making them a valuable complement to optical measurements, especially in cloudy situations. This study evaluated the possibility of using segmentation methods to detect wet snow in Sentinel-1 SAR images and their ability to track seasonal changes. Several segmentation methods from different families were applied to the Sentinel-1 SAR image time series over the 2017–2018 period for a steep mountain area in the French Alps.
The study compared wet snow segmentation results, optical snow masks, and information from Météo-France forecast bulletins. The results showed that the segmentation methods effectively identified wet snow in the study area and highlighted the importance of filtering images before segmentation. Additionally, we showed that the PSSP score is valuable for exploring segmentation results by comparing snow probability curves at the scale of a mountain massif, and that some segmentation techniques overestimate snow at lower altitudes while others underestimate it at higher altitudes. The PSSP score could help guide the calibration of segmentation methods and fine-tune their parameters to obtain the best possible segmentation results.
It is important to note that some scores used in this study might favour techniques that create a smoother wet snow mask over methods that retain some variability in the wet snow area contours. It is therefore essential to maintain a close connection between the segmentation techniques discussed in this study and the metrics employed to evaluate their effectiveness. Our study also raises the question of whether existing distances suffice or new scores should be developed to better discriminate between methods that preserve the variability of wet snow zone contours. Some techniques used in this study (such as CV and RW) have great potential for image segmentation but need further investigation, particularly to optimize computation time and to better calibrate the methods regarding initial conditions, parameters, implementation and possible generalizations. Additional work is in progress to explore the possibilities of the Chan–Vese and Random Walker methods in more detail.
It is essential to consider the area and period studied and the evaluation methods used when analyzing the results. To gain a more comprehensive understanding of the findings, conducting tests over an extended period, ideally spanning several winter seasons, is necessary.

Author Contributions

Research design, F.K. and G.J.; Data analysis, A.G., G.J. and F.K.; Data preprocessing, P.D. and F.K.; Result analysis, A.G., F.K. and G.J.; Writing—Original Draft Preparation, F.K., G.J. and A.G.; Writing—Review and Editing, F.K. and G.J. All authors have read and agreed to the published version of the manuscript.

Funding

This study was part of the SHARE project and received funding from CNES through its APR research call for projects.

Data Availability Statement

The data and diagrams generated in this article can be obtained on request from the corresponding author.

Acknowledgments

The authors sincerely thank the three anonymous reviewers for their valuable feedback, which greatly enhanced the article. The authors would like to acknowledge the support from the National Centre for Space Studies (CNES) in providing computing facilities and access to SAR images via the PEPS platform.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Shang, R.; Lin, J.; Jiao, L.; Li, Y. SAR Image Segmentation Using Region Smoothing and Label Correction. Remote Sens. 2020, 12, 803.
2. Li, M.; Wu, Y.; Zhang, Q. SAR image segmentation based on mixture context and wavelet hidden-class-label Markov random field. Comput. Math. Appl. 2009, 57, 961–969.
3. Huang, S.; Huang, W.; Zhang, T. A New SAR Image Segmentation Algorithm for the Detection of Target and Shadow Regions. Sci. Rep. 2016, 6, 38596.
4. Chino, D.Y.T.; Avalhais, L.P.S.; Rodrigues, J.F.; Traina, A.J.M. BoWFire: Detection of fire in still images by integrating pixel color and texture analysis. In Proceedings of the 28th SIBGRAPI Conference on Graphics, Patterns and Images, Salvador, Brazil, 26–29 August 2015.
5. Lindblad, T.; Kinser, J. Image Processing Using Pulse-Coupled Neural Networks: Applications in Python, 3rd ed.; Springer: Berlin/Heidelberg, Germany, 2013.
6. Johnson, J.; Padgett, M.; Friday, W. Multiscale image factorization. In Proceedings of the International Conference on Neural Networks (ICNN'97), Houston, TX, USA, 12 June 1997; pp. 1465–1468.
7. Liu, J.; Wen, X.; Meng, Q.; Xu, H.; Yuan, L. Synthetic aperture radar image segmentation with reaction diffusion level set evolution equation in an active contour model. Remote Sens. 2018, 10, 906.
8. Lu, Y.; Miao, J.; Duan, L.; Qiao, Y.; Jia, R. A new approach to image segmentation based on simplified region growing PCNN. Appl. Math. Comput. 2008, 205, 807–814.
9. Waldemark, K.; Lindblad, T.; Becanovic, V.; Guillen, J.; Klingner, P. Patterns from the sky. Satellite image analysis using pulse coupled neural networks for pre-processing, segmentation and edge detection. Pattern Recognit. Lett. 2000, 21, 227–237.
10. Wang, D.; Terman, D. Image segmentation based on oscillatory correlation. Neural Comput. 1997, 9, 805–836.
11. Wen, W.; He, C.; Zhang, Y.; Fang, Z. A novel method for image segmentation using reaction-diffusion model. Multidimens. Syst. Signal Process. 2017, 28, 657–677.
12. Yuan, Y.; He, C. Adaptive active contours without edges. Math. Comput. Model. 2012, 55, 1705–1721.
13. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66.
14. Kapur, J.; Sahoo, P.; Wong, A. A new method for gray-level picture thresholding using the entropy of the histogram. Comput. Vis. Graph. Image Process. 1985, 29, 273–285.
15. van der Walt, S.; Schönberger, J.; Nunez-Iglesias, J.; Boulogne, F.; Warner, J.; Yager, N.; Gouillart, E.; et al. scikit-image: Image processing in Python. PeerJ 2014, 2, e453.
16. Nagler, T.; Rott, H.; Ripper, E.; Bippus, G.; Hetzenecker, M. Advancements for Snowmelt Monitoring by Means of Sentinel-1 SAR. Remote Sens. 2016, 8, 348.
17. Baghdadi, N.; Gauthier, Y.; Bernier, M. Capability of multitemporal ERS-1 SAR data for wet-snow mapping. Remote Sens. Environ. 1997, 60, 174–186.
18. Magagi, R.; Bernier, M. Optimal conditions for wet snow detection using RADARSAT SAR data. Remote Sens. Environ. 2003, 84, 221–233.
19. Nagler, T.; Rott, H. Retrieval of wet snow by means of multitemporal SAR data. IEEE Trans. Geosci. Remote Sens. 2000, 38, 754–765.
20. Marin, C.; Bertoldi, G.; Premier, V.; Callegari, M.; Brida, C.; Hürkamp, K.; Tschiersch, J.; Zebisch, M.; Notarnicola, C. Use of Sentinel-1 radar observations to evaluate snowmelt dynamics in alpine regions. Cryosphere 2020, 14, 935–956.
21. Lievens, H.; Demuzere, M.; Marshall, H.; Reichle, R.; Brucker, L.; Rosnay, P.; Dumont, M.; Girotto, M.; Immerzeel, W.; Jonas, T.; et al. Snow depth variability in the Northern Hemisphere mountains observed from space. Nat. Commun. 2019, 10, 4629.
22. Tsai, Y.; Dietz, S.; Oppelt, A.; Kuenzer, N. Remote Sensing of Snow Cover Using Spaceborne SAR: A Review. Remote Sens. 2019, 11, 1456.
23. Besic, N.; Vasile, G.; Dedieu, J.; Chanussot, J.; Stankovic, S. Stochastic approach in wet snow detection using multitemporal SAR data. IEEE Geosci. Remote Sens. Lett. 2015, 12, 244–248.
24. Haefner, H.; Seidel, K. Applications of snow cover mapping in high mountain regions. Phys. Chem. Earth 1997, 22, 275–278.
25. Solberg, R.; Amlien, J.; Koren, H.; Eikvil, L.; Malnes, E.; Storvold, R. Multi-sensor and time-series approaches for monitoring of snow parameters. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS'04), Anchorage, AK, USA, 20–24 September 2004; Volume 3, pp. 1661–1666.
26. Karbou, F.; Veyssière, G.; Coleou, C.; Dufour, A.; Gouttevin, I.; Durand, P.; Gascoin, S.; Grizonnet, M. Monitoring Wet Snow Over an Alpine Region Using Sentinel-1 Observations. Remote Sens. 2021, 13, 381.
27. Rott, H.; Davis, R. Multifrequency and Polarimetric SAR Observations on Alpine Glaciers. Ann. Glaciol. 1993, 17, 98–104.
28. Baghdadi, N.; Gauthier, Y.; Bernier, M.; Fortin, J. Potential and Limitations of RADARSAT SAR Data for Wet Snow Monitoring. IEEE Trans. Geosci. Remote Sens. 2000, 38, 316–320.
29. Stoffel, M.; Corona, C. Future winters glimpsed in the Alps. Nat. Geosci. 2018, 11, 458–460.
30. Goetz, D. Bilan Nivo-Météorologique de l'hiver 2017–2018; Revue de l'ANENA, Technical Report, 2018. Available online: https://www.anena.org/5042-la-revue-n-a.htm (accessed on 1 January 2012).
31. Kropatsch, W.; Strobl, D. The generation of SAR layover and shadow maps from digital elevation models. IEEE Trans. Geosci. Remote Sens. 1990, 28, 98–107.
32. Gelautz, M.; Frick, H.; Raggam, J.; Burgstaller, J.; Leberl, F. SAR image simulation and analysis of alpine terrain. ISPRS J. Photogramm. Remote Sens. 1998, 53, 17–38.
33. Yu, J.J.; Quegan, S. Multi-channel filtering of SAR images. IEEE Trans. Geosci. Remote Sens. 2001, 39, 2373–2379.
34. Gascoin, S.; Grizonnet, M.; Bouchet, M.; Salgues, G.; Hagolle, O. High-Resolution Operational Snow Cover Maps from Sentinel-2 and Landsat-8 Data. Earth Syst. Sci. Data 2019, 11, 493–514.
35. Chan, T.; Vese, L. Active Contours without Edges. IEEE Trans. Image Process. 2001, 10, 266–277.
36. Grady, L. Random Walks for Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 1768–1783.
37. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32.
38. Karas, A.; Karbou, F.; Giffard-Roisin, S.; Durand, P.; Eckert, N. Automatic Color Detection-Based Methods Applied to Sentinel-1 SAR Images for Snow Avalanche Debris Monitoring. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–17.
39. Karbou, F.; Gouttevin, I.; Durand, P. Spatial and temporal variability of wet snow in the French mountains using a color-space based segmentation technique on Sentinel-1 SAR images. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Brussels, Belgium, 11–16 July 2021; pp. 5586–5588.
40. Wang, Z.; Bovik, A.; Sheikh, H.; Simoncelli, E. Image Quality Assessment: From Error Visibility to Structural Similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
41. Karbou, F.; James, G.; Fructus, M.; Marti, F. On the Evaluation of the SAR-Based Copernicus Snow Products in the French Alps. Geosciences 2022, 12, 420.
42. Karbou, F.; James, G.; Durand, P.; Atto, A. Thresholds and Distances to Better Detect Wet Snow over Mountains with Sentinel-1 Image Time Series. In Change Detection and Image Time Series Analysis 1: Unsupervised Methods; John Wiley ISTE Ltd.: Hoboken, NJ, USA, 2021; Chapter 5.
43. Baghdadi, N.; Bazzi, H.; El Hajj, M.; Zribi, M. Detection of Frozen Soil Using Sentinel-1 SAR Data. Remote Sens. 2018, 10, 1182.
Figure 1. Location of the study area, French alpine massif delimitation and MGRS Tiles.
Figure 2. Flowchart of the preprocessing of Sentinel-1 images using S1-Tiling chain developed by the CNES radar service.
Figure 3. Snow limit for the northern orientations for seven French massifs (Belledonne, Chartreuse, Bauges, Maurienne, Vanoise, Grandes-Rousses and Beaufortin), as derived from the BERA bulletins for the 2017–2018 season.
Figure 4. Same as Figure 3 but for snow limits for the southern orientations.
Figure 5. Snow–rain limit for the Grandes-Rousses massif as derived from the BERA bulletins for the 2017–2018 season.
Figure 6. Simplified diagram representing all segmentation methods tested in this work, as well as the corresponding acronyms used in the paper.
Figure 7. Training image set for the Random Forest algorithm, without snow edge areas. The images are 100 × 100 pixel sub-areas extracted from the 22 April 2018 SAR image.
Figure 8. Wet snow maps over the Belledonne massif using a 2 dB fixed threshold and different Gaussian filtering distances σ. Tests were performed using late April 2018 data.
Figure 9. RGB SAR image composite using VH backscatters over the Belledonne massif (R: 20170825, G: 20180427, B: 20170825) showing areas with wet snow in magenta.
Figure 10. Flowchart of all processing of Sentinel-1 images with segmentation methods and evaluation of wet snow products.
Figure 11. Example of segmentation outputs over the Belledonne massif using SAR data from 22 April 2018. Results were superimposed on the Sentinel-2 snow mask (20 April 2018).
Figure 12. Probability curves describing the occurrence of snow by altitude at the scale of the Grandes-Rousses massif, obtained for three successive Sentinel-2 dates (pink curves) for which no cloud was recorded over the massif: (a) 15 April 2018, (b) 20 April 2018 and (c) 25 April 2018. The figure also shows the snow probability curves obtained from the segmentation methods studied in the paper using Sentinel-1 ascending scenes observed on 16, 22 and 28 April 2018.
Figure 13. RMS errors computed using probability curves shown in Figure 12 at different dates of Sentinel-2 and Sentinel-1 observations: 16 April 2018 (diamond symbol), 22 April 2018 (square symbol) and 28 April 2018 (cross symbol). Results are given by altitude range: (top) 0–2000 m, (bottom) 2000–2500 m.
Figure 14. Same as Figure 13 but for the altitude ranges: (top) 2500–3000 m and (bottom) all altitudes.
Figure 15. (a) Correlation scores between wet snow segmentations and target snow products (Sentinel-2 observations within 5 days) over the 31TGL tile and the entire season. Results are given for 7 segmentation methods. (b) Same as (a) but for the Hamming distance; (c) same as (a) but for the Heidke Skill Score.
Figure 16. Altitude-time diagram over the Belledonne massif with the normalized percentage of wet snow-covered pixels by class of elevation and time for all Sentinel-1 ascending images. Results are given for southern orientations and for 4 segmentation methods: 2 dBF, RF1, MF and RW. When available, the altitudes of snow-rain lines (or ranges of altitudes) are displayed as red square symbols. The snow lines and the 0 °C isotherm line are displayed as black and red curves, respectively. These estimates were taken from Météo-France forecast bulletins (see Figure 4).
Figure 17. Same as Figure 16 but for northern orientations.
Table 1. Contingency table between the binary optical snow product and the binary SAR-based product (output of segmentation).
                          Total Snow by Optical Image
Wet Snow by SAR           Yes        No         Total
Yes                       a          b          a + b
No                        c          d          c + d
Total                     a + c      b + d      a + b + c + d = n
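The area-based scores reported for the segmentations can be derived from the counts a, b, c and d of Table 1. The conventions below (true detections relative to the observed snow pixels, false alarms relative to the snow-free pixels, and difference in area between the SAR and optical extents) are our assumptions, as the exact formulas are not spelled out here:

```python
def contingency_rates(a, b, c, d):
    """Percentage scores derived from contingency counts:
    a = detected & observed, b = detected & not observed,
    c = not detected & observed, d = neither (assumed conventions)."""
    true_detections = 100.0 * a / (a + c)               # hit rate
    false_alarms = 100.0 * b / (b + d)                  # false alarm rate
    diff_area = 100.0 * ((a + b) - (a + c)) / (a + c)   # SAR vs optical extent
    return true_detections, false_alarms, diff_area

td, fa, da = contingency_rates(a=80, b=5, c=20, d=95)   # toy counts
```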
Table 2. Segmentation scores of the 22 April 2018 SAR image, over the entire 31TGL tile. The target image was the Sentinel-2 snow product image from 20 April 2018.
Scores                   2dB      2dBF     M        MF       CV       RW       RF1      RF2      Otsu     OtsuF
Correlation              0.63     0.80     0.70     0.82     0.77     0.81     0.79     0.83     0.64     0.72
Hamming distance         0.18     0.10     0.15     0.09     0.11     0.09     0.10     0.08     0.18     0.14
Difference in area (%)   −7.88    −19.09   −21.38   −8.22    −11.62   −13.4    −20.37   −4.90    −31.24   −32.02
Struct. Similarity (%)   50.19    84.76    62.47    85.46    78.26    85.83    84.47    85.57    59.44    81.60
True Detections (%)      74.35    78.19    71.28    85.24    80.7     82.13    77.20    87.49    62.80    66.85
False Alarms (%)         12.5     1.9      15.16    4.60     5.41     3.14     1.71     5.35     4.19     1.66
Heidke skill scores      0.63     0.79     0.68     0.82     0.77     0.81     0.78     0.83     0.61     0.69
Table 3. Segmentation scores over the melt season (from 10 April to 17 June 2018) over the entire 31TGL tile. The target images were Sentinel-2 snow products (we allowed up to 5 days of difference in observation dates).
Scores                   2dB      2dBF     M        MF       CV       RW       RF1      RF2      Otsu     OtsuF
Correlation              0.50     0.77     0.60     0.71     0.56     0.77     0.78     0.72     0.51     0.72
Hamming distance         0.20     0.08     0.13     0.11     0.22     0.08     0.08     0.11     0.21     0.10
Difference in area (%)   62.90    −10.14   11.72    33.95    151.53   1.41     −14.73   31.56    94.43    −29.01
Struct. Similarity (%)   55.49    90.16    66.84    86.88    66.29    90.63    90.18    85.10    58.93    89.29
True Detections (%)      74.25    78.18    70.79    87.91    86.33    83.10    76.31    89.11    69.95    65.83
False Alarms (%)         17.89    3.17     8.11     10.53    21.97    4.80     2.47     11.30    14.69    6.89
Heidke skill scores      0.47     0.77     0.59     0.69     0.52     0.77     0.77     0.70     0.48     0.70
Runtime (s)              2.69     9.80     44.76    67.63    1893.42  266.87   2.71     118.27   43.85    40.54

Share and Cite

MDPI and ACS Style

Guiot, A.; Karbou, F.; James, G.; Durand, P. Insights into Segmentation Methods Applied to Remote Sensing SAR Images for Wet Snow Detection. Geosciences 2023, 13, 193. https://doi.org/10.3390/geosciences13070193

